Remarks
U.S. Department of State Third Annual Conference on Program Evaluation - Programs Track
Washington, DC
June 9, 2010



DR. BREWER: (In progress) And she hands me this week-long schedule. And it's this laundry list of meetings and receptions and other lectures that they had scheduled me to do while I was there for the real purpose of doing the keynote lecture at this conference.

And so during the course of my time in Korea, I met with university students at American Corners, with NGO leaders and civil society leaders who were trying to increase women's public leadership and interest in public leadership. There was a vibrant kind of energy to the women's movement and some of the women's rights work that was being done in South Korea at the time.

And then there were receptions at the ambassador's residence, and just a whole host of different kinds of activities that U.S. speakers engage in that bring them directly in touch with the foreign publics they're tasked to go out and speak to.

So that, I think, just gives you all an example of the tool itself. In addition to the traveling speakers program -- slide -- there are also other ways that IIP engages foreign audiences with this program. And the other primary way is through the digital videoconference.

I did a little bit of additional videoconferencing in my past life as well. Most notably, I did this wonderful conference with some women in Palestine. I came here to State and got set up. We weren't in the SA building -- I think it was over in SA-44 -- and I had to find my way around that madness over there, and sat in front of the TV and talked to these activists in the Palestinian territories about some work they were doing to advance women's leadership in their community.

So the IIP speaker program does operate through a number of vehicles. A predominant source of their programming is sending U.S. speakers abroad. We also engage through digital videoconferences; tele-press conferences, which traditionally have been conference calls with journalists; and now also, increasingly, through web chats, webcasts, and podcasts.

What we'd also like to highlight just really quickly, before we move on to our approach to the evaluation, is that the Strategic Speakers Initiative was a new element of the program that was founded in 2006 that was really organized around seven strategic themes here in Washington.

And what they wanted to do was have a little bit of programmatic control over some particular key strategic priorities here from the Under Secretary's perspective, and to send out speakers to talk about those priorities.

And so that was started in 2006. Our evaluation is tasked to see how that is enhancing the program and what kind of added benefits the Strategic Speakers Initiative has brought. The qualifier there is that the speakers themselves need to travel to more than one post on their trip overseas. So you don't just go to Seoul, South Korea; it's required that you go to multiple posts.

And for smaller countries and smaller posts, it's been a real collaborative effort, then, with other countries in their area to bring key speakers to their communities. Slide.

Oh, I'm sorry. Before I move on to Jason to talk a little bit about the overall evaluation, I think at this point it's going to be very helpful for you all to know that the overall program evaluation approach is a retrospective assessment.

It covers the U.S. Speaker and Specialists Program from 1994, when this particular iteration of the speakers program was started in IIP, to the present. We include six countries and 11 posts across five key respondent groups that we'll be talking to you about in more detail, and we're doing both quantitative and qualitative analysis.

And what we're going to do in this presentation is really walk you through the process of our evaluation. We're currently in core data collection, so we have no answers to the "why" questions. We're going to really just be outlining the "how" today.

So with that, I'll turn it over to Jason.

MR. KEMP: Thank you very much, Sarah, and hello, everyone. Good morning. I want to spend just a few minutes talking about our approach, in collaboration with R/PPR EMU, in looking at this particular study and this particular evaluation.

And I can say that what you see here is a pyramid. And as you know, being Booz Allen Hamilton, we're very big fans of coming up with creative visual representations of how we do our work on a slide.

And I can tell you that this constitutes sort of the baseline, but there's a lot behind this pyramid. I probably had a lot more hair when we started developing what is now the pyramid, but nonetheless, I think it was very helpful in terms of the collaboration between ourselves and R/PPR EMU to develop this overall framework.

And if you look at the slide, the first piece of the pyramid revolves around the program study. And the program study is looking at the IIP speaker program: the processes, the various actors involved, and the overall facilitation of the IIP speaker program itself.

The second piece, or the next layer of the pyramid, has to do with a needs assessment. And this is really looking at where the program is today and where all the stakeholders associated with the program would like the program to be. So this is, for lack of a better term, sort of the gap analysis piece of the overall study.

And then the third piece is the impact analysis piece. And I have to tell you, as a project manager, I am an equal fan of all three of these pieces. But if I had to pick one that I was extremely enthusiastic about, I would pick the impact analysis piece.

And the reason -- well, there are a couple of reasons. The first is that one of the things we're doing as an organization, in consultation with Sarah and Cherrika Montgomery and the folks at R/PPR EMU, is looking at how to use different types of tools, different types of methodologies and approaches, to really take a look at the impact of the IIP speaker program.

Now, that presents some challenges that we'll get into later. But it certainly also presents some opportunities as well. And one of the things that we're using as a firm is elements of social network analysis that we really feel will allow us to get some insight into the impact of the program. And we'll be talking about that in more detail later on. So I just wanted to provide you with a brief overview of our sort of theoretical framework for this engagement.

Sarah?

DR. BREWER: So on the next slide, we outline the presentation that we will be giving you today about the process of this evaluation. We're going to start with a discussion of stakeholder engagement, then talk about our research methodologies, then the data collection, and finally touch a bit on analysis. Again, we're currently in core data collection, so we're really just going to be describing the work that we've done so far in this effort.

It's a 22-month engagement. I have no idea what month we're in, but we're right in the middle of it all. Right?

MR. KEMP: Yes, we are.

DR. BREWER: So with that, let's start with stakeholder engagement. This is a key piece to ensuring the success of this effort. And early on in our collaboration with Booz Allen Hamilton, there was a great deal of work that went into identifying our key stakeholders and making sure that we reached out to them in the appropriate ways and gathered the information that we needed to get from them.

And what I wanted to just share in terms of this particular program evaluation is our three key stakeholders, whom we've done a lot of work with up until this point and continue to engage.

The first, of course, is the IIP Speaker and Specialists Program leadership: its director, Paul Denning; its deputy director, Estelle Baird; and also its key program staff. In-depth meetings about the program, its processes, and its history were instrumental in providing the foundation for the data collection phase that we're in now. And without that close collaboration with the leadership, it would have been a much bigger challenge as evaluators to be successful in this effort.

The second key stakeholder is the regional bureaus. They have strategic interests in the project itself, in the program as a PD tool, and in the countries where we're going to collect data. And so we had a very strong collaboration with them and the other two stakeholders listed around country selection. I'm sorry, can we have the next slide, please? Thank you. Here we are.

And so the regional bureaus really played a key role in our country selection in terms of their strategic interests as we're looking to go abroad to collect data in the field.

And finally, there's the importance of the U.S. embassy and consulate public affairs section leadership to having a successful effort. Without the incredible work of the PAOs in the countries that we are in, we again would have faced much bigger challenges as evaluators.

And so really finding the posts where we have collaborative partners and people who have time and interest and energy in working with us on this project was instrumental.

And what I also wanted to say, in terms of sharing with you all some information about an evaluation process, is that case selection is a huge piece of an evaluation effort, and how to choose our cases was a really important exercise early on in our process.

And the engagement of these three key stakeholder groups, along with the expertise of my director and my team here at State, as well as Booz Allen and their research methodologists, was a very important success, I believe, in this evaluation.

We had to choose cases that had important research variation in the use of the program -- in particular, large posts and small posts -- so we can really make some meaningful recommendations and also gather some important data about how the program is performing.

But in addition, the program leadership had interest in going into particular countries where they might be doing different kinds of work with one post. And the regional bureaus had their interests in some strategic questions and particular audiences that they were interested in gathering data from.

And then, of course, there's the public affairs section -- even if we wanted to go somewhere, you know, the President might be coming to visit, so they didn't really want to have another effort coming into their country. And all of those things, I think, are a big part of what Jason and I do: trying to balance and see to it that we have really strong cases at the end.

I will just share with you all that there was a woman in my office who said to me that I would not make everyone happy. And I said, you watch me, because that's the goal. And not everybody got their first choice, but the big goal is that everybody's really confident and comfortable with the blend of cases that we did come up with. And I think that we were very successful in doing that.

So that is a key piece about stakeholder engagement for our project. Next slide, please.

So research methodology: We want to touch a little bit on this here before we go into the data collection, some of the lessons that we've learned so far through data collection, and then analysis, to share with you some of the lessons learned in developing our approach and our method.

So, from the outset, Booz Allen Hamilton came in with a very strong proposal of a quantitative and qualitative mixed-method approach. Most of us in this room who are social scientists understand the benefits of that from an evaluation perspective: it not only gives us the opportunity to collect really important data to speak to OMB metrics and have really great reporting statistics, but also, particularly through the qualitative data collection, gives us a context that can situate those metrics in a really important way. And so we're doing surveys, interviews, and focus groups across respondent groups that we'll talk about in just a few minutes.

The evaluation is actually tasked to examine several metrics, not just the three major metrics that we've identified here. But for the purposes of this presentation, we wanted to give you a flavor of what it is we're actually looking for in terms of how we're evaluating the program.

The first is an OMB metric for public diplomacy: audiences with improved understanding of U.S. policies, society, and values. The second is audience initiation of positive change in their local communities due to PD efforts.

And then the third is a newer priority for our Under Secretary: the development of new partnerships and the sustainability of those partnerships. This program is uniquely situated to develop new partnerships. I'll go back to being a speaker in my former life -- I went on several of these engagements.

And I remember distinctly one of the trips I took to Chile. I met a professor who was teaching at the University of Chile, and I was talking about my work at American. And she was so interested in the center that we sat and we talked for hours. And she's like, "I am doing -- I am creating the center here." And she just went with it.

And we would talk on the phone. I don't know if all of you have used something like Google Translator. I couldn't speak Spanish, but we had to figure out how to talk somehow. So I'd write her these e-mails and I'd use Google Translator. And then she came to Washington, and we found people who could sit down and kind of help us talk. And she started a Women in Politics Center in Chile. That is a new partnership.

I had students who would be interested in going there. And we tried to figure out how to do collaborative research around curriculum development. And those are the kinds of things that this program is uniquely positioned to offer in terms of its efforts.

And then, finally, we'd like to briefly share with you a bit about our pre-test. So we pre-tested in January. Large efforts like this, of course, need to go out and pre-test their data collection instruments in particular, to ensure that we're gathering the right data and everything makes sense to everybody.

But the piece that Jason and I wanted to share with you is that we really got to test a methodology -- and if you could change to the next slide -- that we thought was very exciting. And so our trip to Turkey was centered around two different approaches we could have used to execute this evaluation.

One of them was following a live speaker. So one of the approaches we could have taken in this evaluation is to just follow a speaker out. There's rich data that we could have collected through participant observation, interviews with the speaker, and pre- and post-test interviews with the audience, really developing it around this particular event.

And we did that in Ankara. So we followed a speaker who was coming to the Ankara Bar Association. She gave a lecture, and we spoke to the people who listened to her talk. And we talked to her, and then we talked to the staff. And we collected data in Ankara around this live speaker approach.

But in Istanbul we thought, we're going to test another approach, because there could be limitations to just doing it all in real time. And so we worked with the PAO there to help us develop a list of people who she knew, from her PD contact database, had attended a speaker program in the last fiscal year, so that we could do it in more of a retrospective way.

And so she provided us with that list, and Jason, working with our counterparts in Turkey, sampled from that list. And we did focus groups and tested the instruments with these respondents, who had attended an event in the past.
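
As a rough illustration of that sampling step, here is a minimal Python sketch. The contact records and field names are entirely hypothetical, since the post's actual database schema isn't part of this talk:

```python
import random

# Purely illustrative: hypothetical records from a post's PD contact database.
# The real schema and field names are not described in the talk.
contacts = [
    {"name": "Respondent A", "last_program_fy": 2009},
    {"name": "Respondent B", "last_program_fy": 2009},
    {"name": "Respondent C", "last_program_fy": 2008},
    {"name": "Respondent D", "last_program_fy": 2009},
]

random.seed(1)  # fixed seed so the draw can be reproduced
invitees = random.sample(contacts, k=3)  # k sized to the focus group
print([c["name"] for c in invitees])
```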

And what we really found was that that was a much better approach. Now, it presents some unique challenges in terms of identifying people who've received this intervention, which we're going to talk about a little bit in the next slide.

But what was really important about this approach is that it gave us a much better opportunity to see what happens after the speaker leaves. In particular, given our interest in finding out about the partnerships, sustainability, and relationships that develop through this PD tool, if we're just speaking to the audience right on the heels of the person's talk, we're really not gathering any data that can give us rich information about how that information was used, or whether any sustained relationships developed out of that interaction.

And so, really, the Istanbul approach, I guess, as it's affectionately referred to, was our guiding principle as we looked ahead to core data collection. Slide.

So Jason and I, I think, are going to try and tackle this together. But we wanted to share with you, as I've said several times, this is where we're at and this is where we're going. We're going into six countries, 11 posts, across five respondent groups.

Just to give you all an idea about the size of this data collection effort: we have staff here in Washington. We have staff at the posts. We have the speakers themselves. We have the local audience program participants -- these are the PD contacts who have received our programmatic intervention -- and then our local audience control group.

And Jason, do you want to talk a little bit about our success in identifying those two groups?

MR. KEMP: I would be happy to. Thank you, Sarah.

As Sarah mentioned on the last slide, one of the unique aspects of this evaluation is really drilling down and finding the respondent group -- in this case, the local audiences in the countries where we're doing this evaluation who have been, as we're phrasing it, exposed to an IIP-funded speaker program.

And one of the things that we wanted to do, in looking at this, is make sure that we're doing an evaluation of IIP speaker programs, because IIP speaker programs are not the only programs the Department of State puts forward that involve a U.S. speaker. And so we were trying to come up with ways to make sure that when we were talking to individuals, they had indeed attended an IIP speaker program.

So one of the things that we did is we worked closely with R/PPR EMU and their colleagues in IIP to take information that was available to us through the tracker database, which identified the name of the speaker, the topic, the year the speaker was in the particular country, and whether they were actually in the country speaking or were doing a digital videoconference or a web chat, et cetera.

And then we worked with our local partners to make sure that we instituted a very robust screening process with several redundancies -- which maybe a few participants found a little annoying, because we asked a lot of questions several times. But I think we all understand why.

And one of the things that we had to do is make sure that we built in these redundancies and these abilities to confirm that when the folks participating in our focus groups were speaking about a particular situation they'd been exposed to, it was actually indeed an IIP speaker program.

Now, the flip side of this is that when we were able to identify folks that weren't participants in an IIP speaker program but had been exposed to other interactions with our government through other events, other activities, et cetera, then they became part of the control group. So it really gave us a unique opportunity to make sure that we were clearly delineating between the two groups.
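
A minimal sketch of what that kind of screening-and-assignment logic might look like, assuming tracker records carry the speaker, topic, fiscal year, and country. The field names, sample records, and fuzzy-year rule here are illustrative assumptions, not the evaluation's actual procedure:

```python
# Illustrative sketch of the screening redundancy described above: a respondent
# counts as "treatment" only if their recollection matches a tracker record;
# otherwise they become a candidate for the control group.
tracker = [
    {"speaker": "jane doe", "topic": "rule of law", "fiscal_year": 2003, "country": "turkey"},
    {"speaker": "john smith", "topic": "ngo development", "fiscal_year": 1999, "country": "turkey"},
]

def screen_respondent(speaker: str, topic: str, year: int, country: str) -> str:
    """Assign a respondent to the treatment group or flag them as a control candidate."""
    for rec in tracker:
        if (rec["country"] == country.lower()
                and rec["speaker"] == speaker.lower()
                and rec["topic"] == topic.lower()
                and abs(rec["fiscal_year"] - year) <= 1):  # tolerate imperfect recall of the year
            return "treatment"
    return "control candidate"

print(screen_respondent("Jane Doe", "Rule of Law", 2004, "Turkey"))  # -> treatment
print(screen_respondent("Jane Doe", "Economics", 2004, "Turkey"))    # -> control candidate
```

The one-year tolerance is the kind of deliberate looseness such a screen might need, since respondents recalling events from a decade ago rarely remember the exact fiscal year.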

One of the things that I can say as a project manager is that it sounds very simple the way we're saying it right now, but it certainly was not. We did not get it 100 percent right the very first time that we worked to do this, so it was a continuing refinement of our process to make sure that we incorporated lessons learned.

But what I can say is that one of the things that I found the most interesting is that when we were actually in Cairo, Egypt, we were sitting there and everyone was talking about the experiences they had related to the United States speaker program.

And even though it's not branded as an IIP U.S. speaker program, we were there, and we knew which presentations and works they were actually referring to, and they were all IIP speaker programs. And so that gave us some assurance that our screening process was doing its job of making sure we were getting the right groups.

DR. BREWER: And I'll just -- oh, I'm sorry.

MR. KEMP: I think there was a question?

QUESTION: (Inaudible.)

MR. KEMP: The control in this instance is those who have not been exposed to IIP speaker programs, but have been exposed to other programs.

QUESTION: But they are PD contacts?

MR. KEMP: But they are our PD contacts, yes.

DR. BREWER: And I'll also just say, we're almost done. And if we could just hold questions, and then we're going to open it up, because I'd love to hear from all of you.

For those of you who are the social scientists in the room, a big piece around this screening process is that we needed it to be consistent across all the cases. And those of you in the State Department realize that some bureaus or some posts have much better recordkeeping than others. Right? Some posts may have these lists of attendees, and other posts won't have lists of attendees.

And so we needed to think of an approach that could be consistent, where we could be confident that it wasn't an external actor making the identification -- it wasn't Sarah Brewer saying Jason Kemp came to this event, and Jason Kemp saying, no, I came to this other thing.

And so we can be a lot more confident as social scientists, because we're really putting the onus on the respondent to identify whether or not they attended the treatment, and that gives us a great deal of confidence in the consistency of our data across the cases.

So, very quickly, I think we'd like to just touch a bit on analysis, and then open it up for questions. I'm talking on and on. Jason is going to first talk a little bit about social network analysis and the very new and groundbreaking approach that his team is bringing to this evaluation.

MR. KEMP: Thank you, Sarah. And while we're talking about social scientists, I do want to note for the record that I am indeed not a social scientist. But luckily and fortunately for me, I work every day with people who are way smarter than I am. And that's where we're at here with social network analysis.

Booz Allen has invested a lot of its own resources in developing what we call a sociocultural development center. And one of the key outcomes of our sociocultural development center is developing ways to utilize social network analysis in providing research support to our clients.

Social network analysis traditionally, in the sense that we have used it in our firm, has been applied primarily in national security engagements, where the engagement time frame is quite a bit longer than 22 months. And that's one of the key aspects of SNA: some elements of it can be intrusive, and it can also require a long lead time.

So what we did is we worked with our experts at Booz Allen, in consultation with R/PPR EMU, to develop an approach to SNA that we felt could really work within this time frame of the 22-month engagement and get down to some critical questions around impact analysis.

So I think, to summarize in non-social scientist terms, what we're seeing here is the beginning of an application of SNA, from our perspective, that has traditionally been applied in much longer engagements with much more intrusive elements.

We're trying to be a little less intrusive. We're trying to do it within an appropriate time frame, to make sure that this evaluation is meaningful to all the stakeholders who are interested in it. And we think the interesting lesson here, for us and for applying SNA in public diplomacy evaluation, is to look at the opportunities associated with it, but also to address some of the constraints as well.

So that's my non-social scientist summary of this slide.
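
For readers curious what a lightweight, less intrusive SNA pass might look like in practice, here is a minimal sketch using the open-source networkx library. The survey question, the names, and the choice of metric are all assumptions; the talk does not describe Booz Allen's actual SNA implementation:

```python
import networkx as nx

# Illustrative only: a tiny directed graph built from a hypothetical survey
# question such as "Who did you share the speaker's information with?"
shared_with = {
    "participant_1": ["colleague_a", "colleague_b"],
    "participant_2": ["colleague_b"],
    "colleague_b": ["student_x", "student_y"],
}

G = nx.DiGraph()
for source, targets in shared_with.items():
    G.add_edges_from((source, target) for target in targets)

# Degree centrality is one simple, survey-friendly indicator of who is
# spreading the program's information most widely.
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```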

DR. BREWER: Next slide.

MR. KEMP: Sarah mentioned earlier that we are currently in the process of data collection. And one of the things that I wanted to sort of share with everyone is just looking at some of the major metrics that Sarah mentioned earlier, and looking at some of the research that we've collected to date.

So I'm not going to go through each one of these bullets. But I just wanted to show that, I think, it supports some of the connections that Sarah was making, particularly around going forward in a retrospective manner, and looking at some of the key indicators around building partnerships, et cetera.

We've seen some of that through a lot of the work that we've been doing in terms of focus groups with local audiences in the countries where we've currently been collecting data. So I thought that this would be interesting, as we transition into the Q&A portion, to just throw up a couple examples of some of the information that's coming in.

DR. BREWER: Slide. And finally, (inaudible) about the U.S. speakers themselves and the countries and the dates and the types of programs that they've done. And I put it into a statistical software tool to run some preliminary analysis on it.

So the data in the few slides I'm going to present after this are from this tracker database. Basically, it's descriptive in nature; I'm just going to tell you all sort of what is going on. The why for these data is still a question that we're in the middle of answering.

So there were a total of roughly 1,300 speakers. Some of those speakers spoke at more than one program, so that's not 1,300 programs; there could have been three speakers on one DVC. So the unit of analysis in this data set is the speaker. The data cover six countries over the past 13 fiscal years and six different program types.
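
As a rough sketch of that descriptive pass, here is what the setup might look like in pandas. The column names and rows are hypothetical; only the unit of analysis (one row per speaker engagement) follows the description above:

```python
import pandas as pd

# Hypothetical stand-in for a tracker database export.
speakers = pd.DataFrame({
    "speaker":      ["A", "B", "C", "D", "E"],
    "country":      ["India", "India", "Egypt", "Italy", "India"],
    "fiscal_year":  [2007, 2008, 2008, 2005, 2008],
    "program_type": ["DVC", "Traveling", "Traveling", "DVC", "DVC"],
})

# With the speaker as the unit of analysis, simple counts give volume
# by country and fiscal year -- the kind of chart described below.
volume = speakers.groupby(["country", "fiscal_year"]).size()
print(volume)
```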

Just very briefly, the target of opportunity is a way in which the speaker program uses experts who are already in country to engage foreign audiences. And so it's really a different funding source than webchats or tele-press conferences.

And for the Strategic Speakers Initiative, we only have a few that have popped up in that data, and the numbers that are there, I think, would require us to do a little bit more cleaning and double-checking. So this is just very preliminary. Next slide, please.

So the first two slides, while very challenging to read, really are just trying to give you all a sense of volume distribution across the six countries. So here we have our six cases, and this shows you volume by year in terms of number of speakers that engaged with foreign audiences overseas during these 13 fiscal years.

If you go to the next slide -- which is also very challenging to read, I understand -- I've just flipped it. So here are all the fiscal years along the bottom, with all six of the cases stacked against each other.

Can you go to the previous slide? You can see, of course, that India has the greatest number of speakers engaging with their audiences vis-a-vis the other countries. I also think an interesting thing on this sort of descriptive presentation is that Egypt is getting more speakers in more recent years. And there can be important reasons why that is the case. Next slide, and then the next slide.

One of the things -- the first possible answer as to why India has so many more speakers -- is that when we break it down by type of program event, whether it's traveling or DVC, we can see that India is programming in a much more robust way around DVCs, which are of course cheaper, which is going to raise the number of speakers interacting with their audience. Right?

And so India is actually doing more speakers through DVC than they're doing through traveling, and that's not the case in any of our other cases. And so through this preliminary analysis, we're starting to tease out some of these factors, which of course will inform the data that we're collecting in the field and our analysis as we move forward.
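
A one-step extension of the earlier sketch shows how that traveling-versus-DVC breakdown might be tabulated, again with invented data:

```python
import pandas as pd

# Same kind of hypothetical tracker export as in the earlier sketch; a
# country-by-type crosstab surfaces a pattern like India's DVC emphasis.
speakers = pd.DataFrame({
    "country":      ["India", "India", "India", "Egypt", "Italy"],
    "program_type": ["DVC", "DVC", "Traveling", "Traveling", "DVC"],
})

print(pd.crosstab(speakers["country"], speakers["program_type"]))
```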

Finally, the last slide, and then we're going to throw it out for questions. I'm going to come back to my home: as I talked about earlier, I started with this program as a U.S. speaker, speaking on women's leadership. I cannot end this presentation without doing my preliminary gender analysis of how we're doing on sending women overseas for this program.

And we can see -- this is through absolute counts -- that we are programming more men as U.S. speakers and specialists than women, and the distribution is actually pretty consistent across all six cases.

I transferred these raw numbers into percentages. Just a little takeaway: the highest percentage of women engaging foreign audiences as U.S. speakers is actually in Egypt, and the lowest is in Italy. But generally, it's around 30 percent women to 70 percent men -- in Italy, 77 percent male; in Egypt, 65 percent male. So just a little food for thought.
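
The counts-to-percentages step is simple arithmetic; here is a minimal pandas sketch, with invented counts chosen only so the male shares echo the Italy and Egypt figures just cited:

```python
import pandas as pd

# Hypothetical gender counts per country; not the evaluation's real data.
counts = pd.DataFrame(
    {"male": [77, 65], "female": [23, 35]},
    index=["Italy", "Egypt"],
)

# Convert raw counts to row percentages, as described in the talk.
percentages = counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)
print(percentages)
```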

So to sum up, we are thrilled to be here at this conference and really, really welcome your questions in the time that's remaining. So thank you so much.

MODERATOR: For the questions, if you could use the mikes so we can catch the audio. Thanks. There's a mike over here, and then there's a mike in the back for the back row.

QUESTION: If I may, two questions. The first one's easier.

In terms of the respondents, the year of participation -- this might seem obvious -- did you find that the more recently someone attended an event, the more likely they were to respond and take part, either in the online survey or the focus groups? Basically, how much difficulty did you have locating participants from 10, 13 years ago, or did that not matter?

MR. KEMP: That's a very interesting question. Initially, one of the thoughts we had was that there would be some difficulty there. However -- I'll give you an example.

In Egypt, we were able to identify individuals who could recall supporting or attending what was an IIP-funded speaker event as far back as 10 years ago.

And then also in Turkey recently -- we were in Turkey, I believe, last month -- we had a really nice distribution of folks who had participated in events over the last, I would say, three to five years. But we also had those who had been longer-term public diplomacy contacts, or contacts of the post, and they recalled attending specific events over the last 10 years or so.

And in fact, going back to the metrics piece we talked about earlier, it was really interesting to hear from them -- from a qualitative perspective -- about some of the takeaways they've had from their experiences of being associated with the U.S. embassy or consulate in that particular country.

And that really spoke to some of the outcomes around partnerships and understanding, et cetera. Particularly in Turkey, there seemed to be a lot of U.S. speaker topics around rule of law and democracy and NGO development toward the end of the '90s, and there seemed to be a tremendous amount of recall from those folks even though it was about 7 to 10 years ago.

QUESTION: So it wasn't recency; it was salience of the topic and things like that? Okay.

If I can, are you looking at all at the degree to which the people who participate, or who are invited, already have, let's say, an affinity toward the U.S. -- that that's why they show up to begin with -- and at the self-reported change in their involvement with and affinity toward the U.S.? Because it seems rather natural that somebody would attend an event sponsored by the U.S. because they already had some affinity.

MR. KEMP: I will speak from not the Department of State perspective but from a Booz Allen perspective. I think that that's one of the things that we're certainly looking at, and I think it speaks to one of the interesting aspects of the study, which is that we have multiple participant groups.

So we're able to get perspectives around the contact database and the outreach approach of an embassy or a consulate. And those vary across all the different embassies and consulates participating in our particular study.

And so I think one of the questions that we're looking at through this study is looking at those contacts and determining, are you talking to people that already have a strong affinity for the United States, or are you talking to people that are more in a persuadable category?

And if you're talking to youth, are you talking to youth and documenting that? Or are you talking to professors who then serve as gatekeepers for you to go forward and talk to youth? And the question is, if you are using professors as gatekeepers, which makes a lot of sense, do they become the only PD contact?

Are they the only ones who get entered into your contact database and start building up a relationship? Or is there an effort there to collect information around the students and the future leaders, et cetera? And how is that relationship managed from a particular post?

So I think it's a very interesting question. It's one of the things that we've been thinking a lot about. And I know that Sarah and Cherrika and the folks in R/PPR EMU have been thinking quite a bit about it as well. So I'll turn it over to Sarah.

DR. BREWER: Yes. I'll just say, to follow up on Jason, that I think that's something -- it's part of the evaluation process -- that we will be able to tease out around that very question. I mean, the question that you asked, we don't know the answer to. It's part of what we're looking at right now. Yes.

Other questions? Yes?

QUESTION: Yes. Thank you very much for a great presentation. I wanted to ask about how this data will be used moving forward. What's your vision for taking this data and using it to influence program design, and how do you engage your stakeholders, both in the leadership of your bureau and your program officers that are scheduling folks, to influence how the program will evolve over the next three to five years?

DR. BREWER: I think that's a great question. And Cherrika, my director, is in the back, and she also may be able to speak to this.

But a part of this effort -- and a big strength of our work with Booz Allen -- is really to provide usable information that can improve program performance. And so the needs assessment, and the gap analysis in particular, is really going to show where improvement can happen.

And so, built into the 22-month engagement, there is a very strong component around ensuring that the findings from this research are shared not only here in Washington with the Washington staff in IIP, but also with the regional bureaus, and then out to the posts at the PD foreign service officer level.

I mean, we are collecting data from five respondent groups, and it's critical that we use that to ensure not only that the program can perform to the metrics, but that we can provide really strategic recommendations for the implementers in terms of trying to achieve their goals.

Yes, ma'am? Would you like to --

MS. MONTGOMERY: Thank you. I agree, it was a great presentation. This program has been the flagship program for the Bureau of International Information Programs for many years. And we in the Under Secretary's office have heard sort of requests from the field for us to do a real close examination of this particular program (inaudible).

But also, the Bureau of International Information Programs and the front office is quite eager to make changes, sort of turn this program from what was an historic USIA program that was there for (inaudible) Department of State, and make it a 21st century type of diplomatic effort.

And this really supports Under Secretary McHale's (inaudible) framework about how we are determined to make the Bureau of International Information Programs a little high tech, very agile sort of bureau that has programs that really focus on reaching new and emerging audiences.

So we're in close consultation with the front office for IIP. We actually do have an interim report due out on September 30th. That report will be shared not only with the Under Secretary, but directly with the front office. This evaluation is really for, one, the Under Secretary, and how she determines funding and the continued direction of (inaudible); but really it's for the bureau itself and how it wants to reconfigure what it does -- whether or not SSI and the current set of strategic themes are still applicable right now, particularly in the Obama administration; whether we want to change; and how this supports the rebalancing efforts and the conversations we're having between State and DOD. So we do plan to share.

DR. BREWER: Additional questions? Yes, sir?

QUESTION: Yes. I was wondering. You have a variety of different program types. You have a speaker who actually is on the ground speaking to an audience live. You have electronic programs. You have programs where you can see the speaker and programs where you only can hear the speaker, et cetera, et cetera.

So are you actually doing six different program evaluations? Are these considered separate programs? Because, to me, they're somewhat different. Being in front of a speaker and interacting with that speaker personally (inaudible) is a different animal than doing it by TV or telephone.

DR. BREWER: Right. I think what's really exciting -- I don't think we're imagining it as six different programs. Right? I mean, that's not how we're approaching it. This is one program that's being executed through a variety of formats.

The diversity of format -- I think you're right -- gives us an opportunity to compare formats in terms of effect. Are there any differences in effect between bringing a body overseas and just putting it on the internet and chatting?

But I also think that one of the things we're learning as we go along -- and an important piece of the new way the program is designed -- is that these formats are no longer really mutually exclusive.

And so you may have a traveling speaker who goes into a country and is also doing DVCs and webchats and engaging online -- all in one program.

And so it's more of an opportunity to answer some really critical questions: which audiences are you reaching through different formats, and how can the formats be stacked together to further your message or the reach of the speaker? And so those are the kinds of questions that we're engaging.

So we're not really seeing the DVC as independent of the traveling any longer. There are important distinctions that we will be able to analyze with the data -- absolutely, we will compare those for sure; I mean, compare them all.

But I think, to speak to Cherrika's point, the front office and the bureau are really looking at how this program can operate in this new environment. How can it be modern? And what we're seeing in some of our case selection is very innovative practice, where they're layering, you know, these outreach mediums on top of each other.

And so it's not exactly one or the other any longer. And that's an important piece.

MS. MONTGOMERY: To add to that --

DR. BREWER: Please.

MS. MONTGOMERY: -- we think that where we really should give credit is to Sarah and also the Booz Allen team. They've developed some really complex and very cutting-edge instruments to make sure that we can administer everything from the face-to-face interviews to the online surveys, even our quality (inaudible).

We're hitting on all aspects of the six trees or branches of this one program, the U.S. Speakers Program. And I have to say this is one of the challenges for public diplomacy programs (inaudible) --

DR. BREWER: Absolutely.

MS. MONTGOMERY: -- is that they're so vast. And so, compared with what might be a typical market research poll or a general random-population survey, our surveys are intense, and they take about 40 minutes to get through.

And everyone says, oh, my God, it's so long. But it's the only way that we can really balance the multi-faceted approach this one program takes with making sure that we are being cost-effective, that we have that foreign audience, that key participant, in front of us, and that we're actually asking the right questions pertaining to the scope of this evaluation.

DR. BREWER: That's right.

I think we have time for one more question. Yes, sir?

QUESTION: (Inaudible). Are you looking at how the audience was selected by the embassy, how the U.S. government interacted with this? The issue of language, the local language versus English �'�' there's lots of selections being made at the very beginning that didn't look at any of their control groups to see the influence of something, the subject matter of the speaker.

How critical is that? And what is the relation of the audience itself to the subject matter? Is that important? Are you identifying who the influential people are within the country, whatever that means? And were those the people who were, in fact, in the audience or not?

I just don't know how rich the database you're collecting is, whether you can look at many other things. But what you decided not to look at is also quite important.

MR. KEMP: I think those are all critical questions. And I have a feeling that you must have been sitting in on some of our project meetings because those are a lot of great questions.

But to answer the first part of your question -- are we asking these particular things? How was it organized? Who was your audience? What did that look like?

To answer your question, yes. We're asking those questions across multiple instruments that are part of the overall data collection effort. And we agree that those are critically important questions, and I think it speaks to Cherrika's point about the multi-faceted approach particularly associated with this program.

So yes, those are key questions we're asking across all instruments.

DR. BREWER: And I would also like to say that I think your instinct around who the influencers are -- I mean, part of the relational network analysis capability of Booz Allen is that we're really going to see some of the exciting methods in terms of applying that to PD.

What we're going to try and do is see if we can figure out who those people are: who did you talk to? Who are you speaking to? Who did you share this information with?

So I think, in addition to the really good questions you raised around who's being invited, and the ways that we're trying to tackle who the audience is and who the audience should be -- right? how do you identify the folks who aren't there? -- we're also really trying to tease out and get a better understanding of who the audience is and what kind of relationship they have to the broader community; who it is we're really speaking to through this program, for sure. Absolutely.

MS. MONTGOMERY: Okay. Well, thank you all very much for coming -- is it okay if I wrap up, or --

MR. KEMP: Sure. Go for it. No, go for it.

MS. MONTGOMERY: All right.

MR. KEMP: Thank you all for coming.