Remarks
Dr. Melba Crawford
Washington, DC
June 23, 2009


Female Speaker: We are delighted to have Dr. Melba Crawford for the last of this season's Jefferson Distinguished Science Lectures. Dr. Crawford received – you should sit down. This is very long.

[laughter]

Dr. Crawford received her Ph.D. in systems engineering from Ohio State University, holds the Chair of Excellence in Earth Observation, and has academic appointments in the Colleges of Agriculture and Engineering at Purdue University. She's currently a Purdue Provost's Fellow for Global Initiatives. You see why it's so long -- and she's an associate dean or an assistant dean? Associate dean. I thought that was a mistake.

Dr. Crawford served as a Jefferson Science Fellow at the U.S. Department of State starting in 2004, working in IO and INR. She coordinated science sector activities within the U.S. National Commission for UNESCO and served as an advisor to the U.S. Ambassador to UNESCO. She was also a member of the U.S. delegation to the World Conference on Disaster Reduction in Kobe, Japan, and participated in the U.S. Subcommittee on Disaster Reduction, an advisory committee to the White House Office of Science and Technology Policy.

In INR she developed strategic capability for the training and use of geospatial and remote sensing technologies for response to complex humanitarian emergencies. That's what happens when you try to hurry. We are only halfway through. This is a very distinguished person.

Dr. Crawford continues to serve in an advisory capacity to the U.S. Department of State. In March 2008 she was a member of the STAS Delegation to nine African countries for the 2008 Global Dialogues in Science and Technology, a program which focused on geospatial science for sustainable development in Africa.

She currently heads an advisory committee to the South African Department of Science and Technology for capacity building in space technologies and remote sensing applications. Dr. Crawford's research interests are in statistical pattern recognition, data fusion, and signal and image processing as applied to the analysis of remotely-sensed data. She's a fellow of the Institute for Electrical and Electronics Engineers, IEEE, an elected member of their Geoscience and Remote Sensing Society Administrative Board. She also serves as an associate editor of the IEEE Transactions on Geoscience and Remote Sensing. Welcome.

[applause]

Dr. Melba Crawford: Thank you, thank you very much. You know, you never know when you adopt a new university or a new university adopts you – I came to Purdue actually a couple of years ago – how these things are going to come out on the screen. It is really old gold when you look at it on my laptop, but it's pretty bright gold when you look at it up there so I hope it keeps you awake.

So, when I was asked to speak with you today, the topics we discussed were, as you know, historical: where were we when I was here? What's happened since? And Andy and I immediately homed in on the geospatial technology piece associated with disaster response, and part of that was that when I was here, on December 26 I got an e-mail two hours after the earthquake from the geotechnical engineers saying that this had happened. They were getting ready to put boots on the ground. What did we have with regard to satellite imagery?

Well, at that point in time, the State Department was not particularly well-positioned to respond and so I'll talk to you a little bit about what happened in the ensuing months and then some of the new technologies that have evolved and some of the better practices that we have actually been able to develop.

Since the tsunami of 2004, we have certainly not wanted for additional disasters of various sorts, both here in the U.S. and abroad, ranging from all kinds of floods, to earthquakes, to volcanoes, et cetera. But the whole idea, relative to what our community would like to do, is really to turn these disasters into knowledge: learn from them and be able to mitigate the impacts of future disasters and plan for improved response. So, the first thing we'll talk about is how far we have come since the Indian Ocean earthquake and tsunami in 2004.

And I will then follow that with some discussion of some of the technologies and finish up with some slides that have been shared with me by my colleagues who have been working on the Chinese earthquake and some of them you may have seen in some of the news, but others I think you will not have. And I'll finish up with some of the challenges.

This is not likely to be one of the slides that you saw associated with the tsunami, but it's a very important one, and it is truly the picture of the perfect storm. I don't have a pointer here, but let me just point out a couple of things. This is from a radar altimeter, which is actually used to measure the heights of the ocean, and there are three pictures: two here and one on the following slide. What you see in green, coming from north to south, is a satellite track. It's a descending track, so it's moving north to south; as time goes on, it's moving from higher to lower latitudes. So we look to the left, and what we have is this track. Now, you've got four different passes of this particular satellite. That one happens to be TOPEX/Poseidon, which is a U.S.-French mission, and you see they are all kind of going along with a little bit of wave action except for the very large red one. If we map the latitude from north to south and correspondingly look at the imagery to the left, then what you see is that the very large bump and the trough before it are associated with the outside ring of the tsunami.

As we move forward -- now, this is one hour and 53 minutes after the tsunami began, after the earthquake. Two hours later you have another satellite coming up, and that's why I say this is the perfect situation: you would never expect satellites to be timed right over such an incident. So, what you see is some additional structure; the patterns are becoming more complex as it moves north to south. And then the third satellite [unintelligible] comes up at three hours, and you see how much it has expanded and how much greater the waves are, both in the height and in the depth of the trough.

So, immediately what happened was that around the world all government agencies responded, and within the U.S. what you had was a range of agencies, including the State Department. The private sector -- and this was the sector that I, as a university professor, was most used to dealing with, both here in this country and abroad with the French with Spotimage -- was engaged. "What do we do?" Well, all they knew to do was to go full out and acquire data. Now, data do not necessarily translate into information, as most of us know. It doesn't even always translate into good data, because, as it turned out, it was pretty cloudy during that period. Universities wanted to engage, and then of course there were a range of other groups around the world.

In particular, when the image data were not forthcoming, at least for baseline data, I was contacted by Earth Satellite Corporation, who had these data on a drive. They said, "What can we do?" I met them in the parking lot. I brought the drive home. I started uploading imagery to an FTP site at the University of Texas, which downloaded it, gridded it, created overlays, and sent it to Oregon, where the team departed for Asia. It was, you know, a first start, and certainly not as high-tech an operation as you might have hoped or could have been possible. In the ensuing months, then, there was a lot of data acquired and analysis conducted. This is the kind of thing that came out of it: a sort of situation map that described, you know, where the shoreline was, the extent, et cetera. This would be useful for after-incident analysis and discussion, but certainly not for response and probably not for recovery.

The HIU actually was engaged, and their goal at that particular time was to take what information they had and address the diplomatic community with some general information. So, after that, several of us wrote an article. John Kelmelis, who many of you know, Dennis, and I were all part of this, and some of the things that came out of it were that, in terms of the remotely-sensed data, which was my particular expertise, volume does not necessarily translate into more information, and a lot of times the content, in terms of the acquisitions, was not need-driven in terms of field observation or well-matched in terms of the particular type of data to the problem.

In this particular case, then, we found that a lot of pre-event data were not available, and that was a real problem because much of the time you want to do change detection on your imagery. Timeliness is difficult. It's not just that the satellite can, you know, see every point on the earth every three days for some of the high-res satellites. It's a matter of being able to see it in areas that are not obscured by clouds, being able to download it, being able to process it, and then disseminate it. And that last part, the big "D," distribution, is one of the biggest problems.

Once you have the data, then, you need to utilize it in conjunction with other geospatial products, and they were not generally available. Cadastral mapping was almost nonexistent in some of the locations. There was a lack of standardization between the organizations that were trying to work on this problem. In terms of people going to the field, we had not really done a lot with regard to training for collection of field data, and people generally had a lack of understanding and awareness, let alone experience, with GIS and imagery. So all of these things were inadequacies that we needed to work on, and one of the things that is the most critical is that networking is very important: inter-agency, universities, commercial vendors, et cetera. One of the things about universities that I think is the best and worst of us is that when these things happen, we are not constrained. We can, therefore, become some of the biggest problems ever, but my friends, you know, are typically on a plane immediately because they've connected with their friends at other universities and on-site. For the geotechnical community, their goal was to be there before the bulldozers. They want to stay out of the way of the first responders, stay out of the way with regard to humanitarian issues with rescue, but they want to be there to do the geotechnical assessment associated with structures before the bulldozers get there.

Okay, this is a little bit of a cartoon that came out of – so, it's a better one than we had, but it conveys the same information as our paper did relative to the geospatial technologies. This is a whole cycle which starts actually before the event, so the baseline data have to come before the event. You have to prepare people, educate them. After the event, you have, you know, all levels of response and infrastructure reconstruction, and then you have to think about the next one.

So, where do we contribute in terms of our community? In terms of the pre-event, the potential impact is very major. In terms of the immediate response, if you are flying aircraft imagery and you are looking specifically at bridges or at very targeted areas, then, yes, it can be very useful. But it's not typically part of our geospatial community when it's acquired in that mode, because we do more processing and typically deal with more than the picture. But that's the kind of data that is the most important in the immediate response. After that, in the near period, perhaps moderate; recovery, very significant; and with regard to the future, very significant. So, we think we do have a place in this cycle.

Most of the U.S. government activity historically, though, and particularly at the time of the tsunami, was really focused on response. So, you see that we really had much more that we could have contributed had, first of all, the data been available prior to the tsunami, and had we had our act together in the post-tsunami era.

So, coming out of that, there have been some advances. Some of them had nothing to do with the tsunami and some were ongoing, for example, the GEOSS activities. The U.N. had a number of activities that were focused on disaster response, et cetera, but there was an increased awareness just due to the massiveness of this particular disaster, and I think that it really moved things forward more quickly. One of the things that was particularly important for the international community, relative to remote sensing, was the activation of the International Charter. And when that happens now, data are acquired and can actually be delivered. Much of it is commercially focused, high-resolution data, which is potentially very, very helpful, but, on the other hand, we found that triggering this acquisition and delivering the data, as I'll talk about later from a recent experience, is not always fully the answer either.

There have been infrastructure upgrades and development of early warning systems, most of which haven't really had anything to do with the geospatial community, although it's a very important component. And the public has actually become much more aware, and, again, it's not just due to this tsunami, but how many times a day do you log on and CNN has a map up there? Whether it's the weather or whether it's Tehran, there are maps, and people have become just much more geospatially conscious, partially due to our friends at Google Earth. And there are new technologies, both in terms of acquisition technologies and in terms of platforms for delivering data, although they are still somewhat limited, and finally, some new remote sensing systems.

So, the second thing I'd like to share with you is a little bit about some of these technologies, and I'm pretty sure that most of you do not deal with some of these on a daily basis. The first, which is optical imagery, of course you do. It ranges from traditional photography to imagery that is acquired in, as we say, multiple bands. But the other technologies that I'll talk about, radar, where the U.S. does not play a major role internationally, and LIDAR, which has potential on the ground and in space as well as airborne, are, you know, new players in the disaster response arena.

GPS and GIS: there have been some advances, but most of it is really in terms of on-the-ground types of things, and really the answer to how you can deal with this best is to integrate the information, both real-time and after the experience. As I said, I'll talk a little bit about the earthquake.

Okay, so, here are the three kinds of data that I will be talking about. First, optical imagery. Now, we're moving forward, and in terms of the civilian sector the U.S. definitely is the leader, but it is in the commercial arena. With optical imagery we can get resolutions, you know, sub-meter easily, and these are really good for damage assessment, particularly with regard to infrastructure in urban areas. But these kinds of data, as we'll see when you have massive disasters such as the earthquake in China, really cannot solve the whole problem.

SAR, which I show here in the second image: what you see here in terms of color is a topographic map, actually in terms of change, and this is over California. The data range anywhere from 30 meters now to as good as 1 meter on one of the European satellites, and what it can be used for is not just single imagery; the kind of change detection you do with these types of systems is referred to as interferometry. And it works, and I'll talk a little bit about it later, on the concept that this is an active system: you send out an electromagnetic wave and you get it back at the sensor, and if you have another antenna, then the only difference between the signal scattered back to the first antenna and to the second antenna should be the phase information. Those of you who are physicists know about that, and that can actually be transformed into topography and change. So, if you look at the upper right-hand corner, this is really in terms of centimeters, and it ranges from minus one to seven centimeters of vertical change in this area over time.
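To make the phase-to-change idea concrete, here is a minimal sketch (not any particular mission's processor) of how a repeat-pass interferometric phase difference maps to line-of-sight ground displacement; the C-band wavelength below is just an illustrative value.

```python
import math

# Repeat-pass InSAR sketch: the wave travels the extra path twice (out and
# back), so a phase difference dphi corresponds to a line-of-sight
# displacement of d = wavelength * dphi / (4 * pi).

def los_displacement(phase_diff_rad, wavelength_m):
    """Line-of-sight displacement implied by an interferometric phase difference."""
    return wavelength_m * phase_diff_rad / (4.0 * math.pi)

# Illustrative C-band wavelength (~5.6 cm), typical of European SAR satellites:
lam = 0.056
# One full fringe (2*pi of phase) corresponds to half a wavelength of motion:
print(los_displacement(2.0 * math.pi, lam))  # 0.028 m, i.e. 2.8 cm
```

This is why interferograms are drawn as fringe cycles: each full color cycle is only a few centimeters of surface motion, which is how the slide's minus-one-to-seven-centimeter range can be resolved from orbit.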

LIDAR: what I show here is a ground-based LIDAR, which has been the new instrument on the block and is actually being carried in almost as easily as GPS units these days.

So, as we look at this range of sensors, what we have up here in the upper left-hand corner is what we can see, and this is the range of wavelengths you see on the scale here. Traditional photography occurs, you know, in the range of the human eye, whereas the multispectral scanners, and that's where QuickBird, Landsat, SPOT, et cetera, operate, cover the visible, near-IR, and into the shortwave infrared. You can even have thermal scanners for temperature in this range. So if we look down here, what we see are sensors whose platforms have been acquiring data for some number of years. This is Landsat-7, but most of the Landsats are in this same range; SPOT and IKONOS, these are the higher-resolution ones. You see that as you go to higher resolution, you go to fewer bands, so you get less spectral information. But as you come out in wavelength, we make a major jump here from the visible, near-IR, and thermal out to the microwave, where you're in the range of anywhere from a few centimeters typically to a meter. Then what you've got is active and passive responses that are going to be used for soil moisture, and actually, because they can see through the clouds, they will get around that other difficulty we have down here. And the final ones we have up here are the LIDARs. Now, LIDAR depends on what frequency you want to design and operate at, but most of them are anywhere from the blue-green to the near-infrared range.

Okay, oops. I forgot we had a -- so, we have a number of these satellites. Now, all of these are international satellites. The U.S. put up the first SAR satellite but, except for the shuttle, has not had an operational space-based SAR since then. It is, I will say, though, one of the high priorities on the Decadal Survey for NASA. But we have the Europeans that are flying Envisat. There is TerraSAR-X, which is a high-resolution one. We actually have ALOS/PALSAR from the Japanese, and then another X-band one has been put up: COSMO-SkyMed.

There is no single solution. As we look at this range, what you have in terms of appropriateness of spatial scale or areal extent maps pretty closely to the different sensors. So, you have the very large-scale optical and SAR sensors; then you have medium-scale, medium-resolution ones, and high-resolution ones; all of these overlap in each of these arenas, so it gives you the perspective that they could probably be used to synergistically exploit their mutual strengths. LIDAR is always used over smaller regions, partially because it's currently flown, as a scanning system, on an airborne platform, and then of course the terrestrial instruments are for doing individual bridges, buildings, landslides, et cetera.

Now, we have been engaged in this business for a very long time. If we look back to 1972, we started with our first Landsat satellite, and we had a plan: they went up every few years. Well, Landsat-4 went up, Landsat-5 in 1984, Landsat-6 went into the drink, Landsat-7 went up in 1999, and it's limping along. We hope there will be a Landsat follow-on, but hopefully with improved technology.

Now, what you see is really the output of a lot of these sensors, but there's much more that goes into it, and sometimes I wonder with amazement that we get anything out of it at all. Because if you look at this from the point of view of reflected energy -- let's think in terms of the passive optical sensors -- then you have energy from the sun, okay, that comes down through the atmosphere, and it's either going to be absorbed or reflected. You can also have emitted radiance. Now, while it's coming down through the atmosphere, what you see is this would be the sun, the solar radiance. These are the atmospheric transmission windows, so you see that in many parts you don't get any transmission at all. Then that gets mapped into a path radiance. You get a surface reflectance that's going to be characteristic of a particular target. It goes back up through the atmosphere again to the sensor, all right? Then you have a data downlink, and there's noise associated with that. You have your measured signal, which then needs to be calibrated. You have some sort of an atmospheric correction that will be run based on some models; you may have measurements of the atmosphere. Finally you have reflectance and you're ready to start analyzing the data. A lot can go wrong in all of these processes, so it truly is amazing that we have the quality of data that we do.
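The chain just described can be written as a toy forward model and inverted. This sketch assumes a Lambertian target and a single band, with every number made up for illustration; real atmospheric correction uses the radiative-transfer models the speaker mentions.

```python
import math

# Toy measurement chain: at-sensor radiance = path radiance +
# (upward transmittance * reflectance * solar irradiance * cos(sun zenith)
#  * downward transmittance) / pi. Inverting it recovers reflectance.

def surface_reflectance(L_meas, L_path, tau_down, tau_up, E_sun, sun_zenith_deg):
    """Invert at-sensor radiance to surface reflectance for a Lambertian target."""
    cos_theta = math.cos(math.radians(sun_zenith_deg))
    return math.pi * (L_meas - L_path) / (tau_down * tau_up * E_sun * cos_theta)

# Forward-model a pixel with made-up atmospheric terms, then recover it:
rho_true = 0.30
E_sun, theta, tau_d, tau_u, L_path = 1850.0, 30.0, 0.85, 0.90, 12.0
L = L_path + tau_u * rho_true * E_sun * math.cos(math.radians(theta)) * tau_d / math.pi
print(round(surface_reflectance(L, L_path, tau_d, tau_u, E_sun, theta), 3))  # 0.3
```

The point of the sketch is the speaker's point: every term in that expression (path radiance, the two transmittances, calibration of `L_meas`) is a place where an error enters before any analysis begins.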

Now, in terms of the optical sensing: I said that we'll talk about three kinds of data types and their application. What you have on the left is just an example of multispectral data. It's high-resolution, and the ranges it typically operates in are the blue, the green, the red, which give you your true-color data, and the near-infrared.

And these are really good data sets for doing baseline mapping. The same kinds of spectral information, and more, are included in larger-scale data sets such as Landsat. The high-resolution scenes are going to be on the order of, say, 10 to 20 kilometers in terms of their spatial extent; Landsat is 180 by 180, so you get much greater coverage. So that is worth something. My particular area of expertise is really in the hyperspectral arena, which is used for materials analysis, for atmospheric chemistry, and for very high-resolution spectral mapping of, say, species, but that's not a type of data that has a lot to offer, at least in the short term, relative to disaster response. So, I won't talk much about it.

Okay, so: my first experience in doing mapping was back in 2003 with high-resolution data, and this gives you a perspective on what can be done in addition to, say, situation maps. This is just a grayscale image, and let's take a couple of areas -- you see them over on the right, blown up to full resolution. This is 100 meters, and the same over there. You see a couple of residential areas. This is after the earthquake. You see the crumbling of the buildings; you see the pancaking of some. There is change, and there is work that can be done, both in visual interpretation and in automated analysis, to actually estimate the degree and intensity of the impact and the damage.

Okay, now, when we come to – those kinds of data are, as I said, really good over urban areas. But if we were to look at some of the larger-scale events -- we've got the Niigata earthquake over here -- what we have mapped is the extent of coverage of data that were acquired during that event at high resolution, so you see the little postage-stamp images superimposed, and, similarly, you see them over this massive area for the Wenchuan earthquake in China. And we'll never be able to get the full picture with these data. I'll come back when I talk about the range of data types that were used for the earthquake in China, and I'll talk about what was done and some of the analysis.

Okay, in terms of a laser, the way lasers operate is really kind of fun. It's a very simple instrument. It's all about knowing where you are and knowing where you're looking. So, this little aircraft, and this actually is one we had at Texas, flies along. The laser scans across track to the flight direction in a zigzag, and if you know where you are, and you know the speed of light and how long the pulse took to reflect back, then you know how far the ground is from the aircraft, so you know what the topography is. Very simple technology, and it's becoming very popular in disaster response: really important for baseline mapping in urban areas, and really important for post-disaster response if you can get it. But there's a big issue associated with it: availability and cost.
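The ranging arithmetic she describes is genuinely simple. A minimal sketch, leaving out the GPS position and inertial attitude data a real system needs, and assuming flat terrain for the geometry:

```python
import math

# Time-of-flight ranging, the idea behind airborne laser scanning:
# half the round-trip travel time times the speed of light is the slant range.

C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_s):
    """Slant range to the target: half the round trip at the speed of light."""
    return C * round_trip_s / 2.0

def ground_elevation(aircraft_alt_m, round_trip_s, scan_angle_deg):
    """Ground height under an off-nadir pulse (flat-terrain illustration only)."""
    return aircraft_alt_m - pulse_range(round_trip_s) * math.cos(math.radians(scan_angle_deg))

# A pulse returning after about 6.67 microseconds traveled about 1 km each way:
print(round(pulse_range(6.671e-6)))  # ~1000 m
```

Because the round trip at aircraft altitudes takes only microseconds, the scanner can fire many thousands of pulses per second, which is where the dense topographic point clouds come from.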

To give you a feel for what can be accomplished, however: many of you know about the topography, the National Elevation Dataset. I came from Texas, so I have these data from Texas. This would be the National Elevation Dataset at 30 meters, which has been around for well over a decade. The SRTM, the Shuttle Radar Topography Mission, then provided improved topography, even at the same resolution. This is what you get with a laser, and if you were to decide that you wanted to know what's under the trees, you can post-process it, and this is what the result would be. So, this is a very powerful capability for doing mapping, and I'm sure you'll see more of it in the future.

For another application, this is on the California coast, where there's a very active coastline, and what you see in this image is this coastline where you'll see the vertical. Look at these two areas in particular, and you'd have your pre- and post- -- let me go back a couple here. Here's your pre-image. Here's the post-image. You can begin to see the sand piling up on the beach, and then, to look at the change, what you see is the red and the blue showing areas, and we'll see it here, where material has been lost and where it has accumulated. So, if we look at the dark brown, between April and September you see the red, and you see that you literally had the top of that cliff falling down and accumulating at the bottom. For a lot of the areas that are in the coastal zone, this is a technology that, as I said, is very important.
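The red-and-blue change map described here is just the difference of two gridded elevation surfaces. A sketch with hypothetical 3x3 grids standing in for the pre- and post-event LIDAR DEMs:

```python
# Differencing pre- and post-event elevation grids: positive change means
# material accumulated, negative means material was lost. Values are invented
# to mimic the cliff-and-beach example (meters above some datum).

pre =  [[10.0, 10.0, 10.0],
        [12.0, 12.0, 12.0],
        [15.0, 15.0, 15.0]]
post = [[10.0, 10.5, 11.0],   # sand piling up at the base
        [12.0, 12.0, 12.0],
        [13.0, 14.0, 15.0]]   # material lost from the cliff top

change = [[b - a for a, b in zip(row_pre, row_post)]
          for row_pre, row_post in zip(pre, post)]

accreted = sum(v for row in change for v in row if v > 0)
eroded   = sum(v for row in change for v in row if v < 0)
print(accreted, eroded)  # 1.5 -3.0
```

Multiplying each cell's change by the cell area turns the same sums into volumes of erosion and accretion, which is the quantity coastal monitoring programs actually report.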

Now, it's not only flown in space; now people are taking these into the field, and of course it's important that they can miniaturize this technology so that you can get into more remote areas. Graduate students aren't getting any bigger with regard to carrying equipment, so you have to deal with it in another way. So, these are a couple of pictures of scanners, and here's one actually set up; they have been used in all of these recent disasters. The whole process is: you set it up and you scan, you register your data to some map, all right, you fuse it all together, and then you quantify your deformation, and this is something that the USGS has become a major leader in.

The SAR technology: I already led into talking about it in the sense that it's an active system, and it's different in terms of the way the data are acquired from optical imagery, which is really acquired in a downward-looking mode, oftentimes scanning. SAR data are acquired to the side; it's side-looking. And the reason it has the word "synthetic" associated with it is that nothing is really synthetic, per se, except that in order to get adequate resolution you'd have to have an enormous antenna. You can't fly that in space or on an aircraft, so to compensate, what you do is take into account the forward motion of the aircraft. You look to the side and, therefore, you create, synthetically, a very large antenna.
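The "enormous antenna" problem can be put in numbers. Using the standard textbook approximations (real-aperture azimuth resolution of roughly wavelength times range over antenna length, versus roughly half the antenna length for a focused synthetic aperture), with illustrative values:

```python
# Why "synthetic": a real antenna of length L at range R resolves only
# ~ (wavelength * R) / L in azimuth, which is hopeless from orbit, while a
# synthetic aperture achieves ~ L / 2 regardless of range.
# (Standard first-order approximations; the numbers are illustrative.)

def real_aperture_res(wavelength_m, range_m, antenna_len_m):
    """Azimuth resolution of a real (unfocused) side-looking antenna."""
    return wavelength_m * range_m / antenna_len_m

def synthetic_aperture_res(antenna_len_m):
    """Azimuth resolution of a focused synthetic aperture."""
    return antenna_len_m / 2.0

lam, R, L = 0.056, 800_000.0, 10.0   # C-band, ~800 km range, 10 m antenna
print(real_aperture_res(lam, R, L))  # 4480.0 m: ~4.5 km pixels
print(synthetic_aperture_res(L))     # 5.0 m
```

Note the counterintuitive consequence: for the synthetic aperture, a *shorter* physical antenna gives finer azimuth resolution, because it illuminates each ground point over a longer stretch of the flight path.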

The resolution from these systems, in terms of spaceborne international systems, is usually from 10 to 90 meters. They are not doing the interferometry that I have referred to here with the really high-resolution systems. The most famous mission of all, though (you know, when we do it, we do it well; we just don't do SAR often), is from the shuttle, where they had this amazing technology with antennas on board and out on a mast, and they mapped the entire earth in about 11 days. It's the only integrated topographic map, for all of its issues, that has ever been created of the entire world, except for the regions around the North and the South Pole, due to the shuttle's orbital limitations.

Now, the other thing that SAR can be used for, in addition to topography, is mapping in a traditional kind of way in cloud-covered areas. It's not as rich in its spectral information, but many, many times these floods are occurring in arenas which are cloud-covered. This is the Myanmar flooding here. You usually only have one band of SAR, as opposed to multiple bands in the optical data, but by taking multiple data sets from different times and stacking them, or stacking two identical images and bringing a third one in for the display, you can highlight change that has actually occurred. And the Japanese have been tremendously successful. This is a longer-wavelength SAR system than the Europeans are flying, and it is particularly advantageous when you have vegetated areas, which many of these places have. And actually, it's not as subject to scattering from smaller kinds of scattering objects.

So, this one I've already shown you here on the left, but I wanted to show, relative to the interferometry, a sequence here. If we were to look at 1997 -- this is from a European satellite during a period of time when they were doing interferometry -- what you see on the color scale here is the range of the change, in millimeters per year. The .19 meter and .34 meter, et cetera, have to do with the rainfall, and so they were trying to bring the accumulated rain into these pictures. As my friend said, this is one of those where the author included a very long set of information in the description, but not in the imagery. So, anyway, don't get too concerned about the .19 meter, except that it has to do with the amount of precipitation, which then relates to the potential for change. And so these then map, in this particular area, into potential for landslides.

Now, as I said, a lot of the value comes from pulling the data sets together, having them geo-referenced from multiple sources. It can be anything from historical data -- population and socioeconomic data are often very important -- to topography; field observations, which must always be acquired with every event; in situ photographs, which are now becoming a common piece of the field campaign; imagery; and, finally, models, because the imagery and the other in situ information are then integrated into models so that you can actually do predictions, and you can sometimes ascertain, in terms of, say, buildings, what future standards should be.

So, finally, I wanted to add here that all of this is coming together now on platforms such as Google Earth. They have really been at the forefront, partially because it's free, it's everywhere, and it's easy, and so you can do your analysis. You can do sophisticated or simplistic kinds of integration, but this has helped us all make major steps forward.

Okay, I'm going to show you some examples of that. This is actually the most recent earthquake in Italy, and so, with Google Earth, people were able to bring up simple imagery. It is not well-registered, okay; this is not a precision product. This is a picture, but what you are doing is overlaying your data, and it communicates information very quickly if you're on the ground and you're doing this and sending it in -- you can actually do some of this over your cell phone now. Then the person who is processing can look at it at night and say, "Oh, we have a gap. We need to go there. Oh, I see there are some other issues." This sort of simplistic thing is very, very important.

Okay, now, I'll finish up today then talking about the earthquake in China and some of the continuing challenges.

Okay, as most of you know, that occurred in May a year ago and, depending on how you refer to the location, it's either Wenchuan or Sichuan, and it was a very large magnitude earthquake. The aftershocks are posted here -- just amazing. Now, as you saw in a picture earlier with the high-resolution data superimposed, this is on the scale of 50 kilometers. You could never cover this area with high-resolution data. This whole area is the affected area, and it's larger than 10,000 square kilometers. It is also in an area that was not readily accessible to outsiders, and that created some significant issues. Because it is severely mountainous, there were significant landslides and the potential for more over the next several months.

As people engaged in doing reconnaissance, we had learned from the past. There was a multi-agency, NGO approach, but the most effective one in this particular arena, given the location, was individual to individual, and then groups could coalesce around that.

People who went into the field were not allowed to do aerial reconnaissance, as you are in many cases. If you were not Chinese, you were not allowed to carry a GPS. Even referring to the word reconnaissance raised some sensitivity. LIDAR was not allowed initially, at least not by the outsiders, who had been doing, and had become used to doing, ground-based LIDAR in Chile and Peru and in Italy. When they traveled, they traveled in large groups, which is not particularly efficient, and the sites that they visited were predetermined by their hosts. Some of that is relaxing now. So they had to make a lot of use of drive-by photography. One of the technologies that has actually improved since the early '90s is GPS-enabled cameras, so now we can integrate our cameras, our kinematic surveys, and the imagery very rapidly. We are used to working long days, and that wasn't allowed either. But given all that, a number of areas were covered, and you can see some of the surveys that are outlined; they were able to cover a significant amount of ground in this arena, with a similar acquisition at another major location for the earthquake.
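The drive-by workflow with GPS-enabled cameras amounts to matching each photograph's timestamp against a GPS track. A minimal Python sketch of that matching step; the function and data names here are hypothetical illustrations, not any team's actual tooling:

```python
from bisect import bisect_left
from datetime import datetime, timedelta

def geotag(photo_times, track):
    """Assign each photo the GPS fix nearest in time.

    photo_times : datetimes at which each photo was taken
    track       : (datetime, lat, lon) fixes, sorted by time
    Returns a list of (photo_time, lat, lon).
    """
    fix_times = [t for t, _, _ in track]
    tagged = []
    for pt in photo_times:
        i = bisect_left(fix_times, pt)
        # Consider the fixes just before and after the photo time,
        # and keep whichever is closer.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
        j = min(candidates, key=lambda k: abs(fix_times[k] - pt))
        _, lat, lon = track[j]
        tagged.append((pt, lat, lon))
    return tagged
```

In practice the camera clock and the GPS clock also have to be synchronized before matching.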

The kinds of data that were used primarily -- we already said that LIDAR was not available, either on the ground or airborne. Now Landsat, as I said, has very large extent even though it's 30 meter. Because we've been flying it so long, there was some pre-event imagery that was clear -- actually they had it from both 2007 and 2008 -- but the post-event imagery was more difficult, and I'll show you some examples of that. So that limited it. The high-resolution satellite data, depending on who you were, you could get in different ways, but both IKONOS and QuickBird were acquired and provided for some targeted studies, and the Chinese began to fly airborne multispectral cameras. And finally, I would say in this particular case the champions were really the Japanese. Of course, they had a satellite that, in terms of its particular orbit and acquisitions, was the best-matched as well, but it's, again, as I said, this L-band SAR system.

Now, the Landsat imagery -- if we look to the left, here's a pre-event image and, again, you see superimposed the extended area, the more highly affected area, and then an area where a significant amount of groundwork was done. In the post-event image, this washed-out white is exactly what you think it is; it's clouds, so there was still limited coverage of much of the arena.

Here are some pictures taken from some of those drive-bys. You can see that landslide material had been deposited in a lake, and other small and large landslides occurred, blocking the road, et cetera. If you were to take one of those QuickBird scenes and blow it up, it is totally amazing. The scour associated with the landslides is just incredible.

If we were to ask, "What could we get from our pre- and post-event Landsat versus our high-resolution IKONOS data?" just to give you a perspective, then what you have is your pre-event image. You see so much more white in here. Now, one of the things you have to take into account is that the spectral information -- the brightness, the darkness -- does not tell the whole story, as we'll see in the next slide, but you see there has been major change. If we were to look at the IKONOS, which is higher resolution, you see much more detail, but essentially the same patterns for this small area. If we do analysis -- and this is the next thing that the science community can actually bring to the table, analysis of that information -- pattern recognition kinds of methodologies can be used, and used in conjunction with ancillary data. So, we analyzed the Landsat data, which is 30 meter -- and this is just a small area -- and ascertained where the scars from the landslides map, and then we looked at the IKONOS data. This happens to be a visual interpretation of it, but a map created from analysis was almost as good. The spectral information is not comparable between the two, but you see that you really didn't have to have the high-resolution data to get valuable information even about localized landslides.
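At its simplest, this kind of landslide-scar mapping is thresholded change detection on co-registered pre- and post-event images: scour exposes bare soil and rock, which is typically much brighter than the vegetation it replaced. A NumPy sketch; the band, threshold, and pixel values are illustrative assumptions, not the actual analysis:

```python
import numpy as np

def scar_map(pre, post, threshold=0.15):
    """Flag pixels that brightened sharply between acquisitions.

    pre, post : 2-D arrays of reflectance (0..1) for the same band,
                co-registered pre- and post-event scenes.
    Returns a boolean mask of candidate landslide scars.
    """
    diff = post.astype(float) - pre.astype(float)
    return diff > threshold

# Toy 2x2 scenes: two pixels brighten strongly, two barely change.
pre = np.array([[0.10, 0.12],
                [0.11, 0.10]])
post = np.array([[0.45, 0.13],
                 [0.12, 0.50]])
mask = scar_map(pre, post)
```

As the talk notes, brightness alone does not tell the whole story; this mask would be combined with ancillary data such as slope.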

If you take slope into account in addition, then you would have information that could actually help you better identify landslides. You know, you might have agricultural areas where the ground is bare, but it's flat, so that's not a landslide, even though spectrally it may look the same. So, for the landslide areas -- we've got the red and the blue, and these are three different areas: area one, the west, and this third area -- what you see is plotted relative to slope: the slope along the bottom and the cumulative percentage of the area at these various slopes. It turns out this maps directly into information that can be used to discriminate the landslides from other spectrally similar features. So here the geology -- the geophysical characteristics -- actually came into account.
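Combining the spectral-change mask with a slope criterion amounts to a simple mask intersection. A sketch; the 15-degree cutoff and the sample values are assumed for illustration, not the study's actual threshold:

```python
import numpy as np

def landslide_mask(spectral_change, slope_deg, min_slope=15.0):
    """Keep spectrally-changed pixels only where the terrain is steep.

    spectral_change : boolean mask of bright/bare-ground change
    slope_deg       : per-pixel slope in degrees, e.g. from a DEM
    min_slope       : below this, change is attributed to flat bare
                      ground (e.g. agriculture) rather than slides
    """
    return spectral_change & (slope_deg >= min_slope)

change = np.array([True, True, False, True])
slope = np.array([25.0, 3.0, 30.0, 18.0])
mask = landslide_mask(change, slope)  # second pixel rejected: flat field
```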

Finally, I'd like to show you some of the interferometry results from the PALSAR data. What we have here on the left is the ALOS/PALSAR, this yellowy-looking thing. One of the things that you do with radar imagery when you're doing this interferometry is you correlate the two acquisitions, which are either from different satellites or from two antennas, and if they are highly correlated, there's not much change. So what you see here is that where you get good coherence -- that is, the two images are highly correlated -- you actually get good information out. If they are too de-correlated, there is nothing there, but in these areas where there is more difference in the coherence you would see some change, and that is along the areas where there are actually landslides.
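The coherence being described is commonly estimated as a normalized complex cross-correlation of two co-registered SAR windows. A minimal NumPy sketch with synthetic data; a real processing chain also involves precise co-registration, spectral filtering, and windowed local estimates:

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence of two co-registered complex SAR windows.

    Near 1: the scene is unchanged between acquisitions.
    Near 0 (decorrelation): disturbance, e.g. landslides.
    """
    num = abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
unchanged = scene * np.exp(1j * 0.3)  # same scene, constant phase offset
disturbed = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
```

The unchanged pair has coherence 1 despite its phase offset, while an independent (disturbed) scene decorrelates toward 0.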

Now, if we look at the right, what we see mapped here is really the displacement. If we look down here, the scale goes from minus five to five, and that is actually in meters, so this is the displacement in the direction of the satellite. If you go to the pinks and the blues, it's positive, okay? If you go to the yellows and the oranges, it's negative. And so you see that the displacement is both negative and positive, up to a meter or two in both directions.

I love this image because it corresponds to the previous one in the sense of what you get out of the imagery. The information comes in the form of what's called an interferogram, which actually appears to be a lovely form of art, but every one of these cycles, if you will, is associated with a fixed increment of deformation, and you count the cycles, so you know how much change there is in every cycle. Then you can actually map that to the full deformation, and that's what's appearing here on the right, which is what you saw in the previous slide. It's called differential InSAR.
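The fringe-counting arithmetic is simple: in repeat-pass differential InSAR, each full phase cycle (one fringe, 2*pi of differential phase) corresponds to a half-wavelength of line-of-sight motion. A small sketch, assuming a nominal L-band wavelength of about 23.6 cm for ALOS/PALSAR:

```python
import math

WAVELENGTH_M = 0.236  # assumed nominal L-band wavelength (meters)

def los_displacement(unwrapped_phase_rad, wavelength_m=WAVELENGTH_M):
    """Convert unwrapped differential phase to line-of-sight
    displacement in meters for repeat-pass InSAR."""
    return unwrapped_phase_rad * wavelength_m / (4 * math.pi)

fringes = 10
d = los_displacement(fringes * 2 * math.pi)  # 10 fringes of deformation
```

Ten fringes thus correspond to about 1.18 m of line-of-sight displacement, consistent with the meter-scale motion on the previous slide.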

So, we've shown you some technology that is now being used in disasters subsequent to the 2004 tsunami. Let's kind of look at the big picture.

In terms of what I would call geospatial awareness, it has really dramatically increased, and that is because it's been continuously in front of people, so when these incidents happen, people are already prepared, and I think that is one of the major issues relative to a disaster: you already understand the technology.

We have had major advances: advances in GPS. I mean, there are GPSs on your camera, GPSs on your cell phone, and digital cameras are miniaturized, with better lenses. Everything is getting better and, finally, there's the cell phone technology, and that is actually where I would say the most change is occurring right now. I know that in Korea they are making major advances there -- paid for by their cell phone companies -- to actually send more information, and of course entertainment is a piece of it, but it can also be used for disaster response and other kinds of data transmission.

SAR sensing has moved forward and, even though we don't ourselves have a SAR satellite up, we can utilize the data that others acquire, and certainly they provide useful information to the international community.

The LIDAR acquisitions are airborne at this point in time. We, the U.S., have flown one spaceborne LIDAR system. It was actually focused on ice, but that system has failed, and a follow-on is one of the future space systems being looked at very carefully. It's probably the number three mission for NASA right now.

We've had advances with regard to wireless-enabled communication, and that certainly goes along with the technologies mentioned earlier.

Computational platforms: Google is one thing, but Google will not be able to handle the large geospatial data sets, and this was one of the biggest problems we had in 2004, and we still have it. When there's a disaster, the data come in, say, to the Pacific Disaster Center, and the information is not intuitive in the way it is delivered. USGS and PDC both still rely an awful lot on flat files, so it's really just a filing cabinet which you use to identify and download the data. So, virtual platforms, hub technologies -- those are improving, but there's a lot more to be done.

Finally, we can never leave out the training. You need to train people, you need to retrain people, and you need to train them again. They need to be trained in the technology; they need to be trained in safety, which has turned out to be a major problem in the field; and they need all of the capabilities associated with analyzing the data, overlaying the data, and interpreting the data. That needs to be an ongoing process, but it's very easy. You know, these kinds of training workshops can occur at lots of different conferences, and given the technology now, it probably only takes an afternoon for people to be trained adequately and to maintain their capability from year to year, so that when the teams are needed to respond, they are ready.

One of the things that has also occurred is improved coordination between the field teams and the remote sensing teams, so that we are now speaking the same language. We have reduced the acquisition time and interpretation time for remotely-sensed data, but it's still not adequate. In terms of the final challenges that we continue to need to deal with: we don't have common operating procedures, even though many times we are within the same community, and we need to work on that. And teams often have differing goals, say, between the science community and the response community.

Another thing that I alluded to earlier, and something in which I think the State Department could play an active role, is that we really underutilize the data provided by the International Charter. It's there, but it's often provided to people who don't have the capability to analyze it, and I can tell you that for sure: in Indiana, when we had the floods one year ago, the data were delivered to Indianapolis, but those of us at Purdue who had the capability to analyze it couldn't do it. We couldn't have access to it, so we had to have someone from the university trained, with all the issues associated with the International Charter. So, trying to facilitate that would be an important contribution.

As we have worked in these remote areas -- and remote sensing is valuable precisely because it acquires data over remote areas -- many of these areas have limited access to the Internet, either continuously or at least during the disaster. Now, certainly there's been a lot of improvement in satellite communications technology, but we're still not there, and that continues to be an issue.

And, finally, in terms of diplomacy and communicating our results to decision-makers, I would say we as scientists often have limited skills and limited experience in this arena. Many of these important results end up in journal papers and on the Internet, but may not have been communicated in a timely or effective way to actually impact policy. So that's another thing that I think we all really need to continue to work on. Thank you.

[applause]

Female Speaker: We can take some questions but please go to the microphones and please identify yourself.

Fernando Itaria: I have no competition, so I'm going to take advantage of the opportunity. My name is Fernando Itaria with the Bureau of Oceans, Environment, and Science. It's great to see you again, Melba. Welcome to the department, and thank you for the great presentation.

The role of geospatial technologies in what you called international diplomacy has radically changed over the last 10 years, since I've been here, for instance. For a natural disaster 10 years ago, the bulk of the international geospatial response was primarily U.S.-based. Today, that's no longer the case. You've given an excellent summary of an area where the U.S. is behind in synthetic aperture radar products. So, my question to you is: put on your policy hat -- where is the sweet spot that the U.S. government should focus on? I just want to point out also that on page nine of the GDAS report that you and I contributed to, conclusion number four highlights how the role and the presence of the U.S. in sub-Saharan Africa has also changed dramatically. We are not as visible anymore. So my question to you is, where do you think we should focus our limited efforts? We, the USG.

Dr. Melba Crawford: The thing that we do best -- and I think there is no one who would question this -- is that we are the best, in spite of our complaints, as a government with regard to remote-sensing and geospatial products: we're best in our delivery, our platforms for delivery, and I think that's where we really need to focus right now. There's also a great need and interest in high-resolution data. Nobody else delivers the quality of high-resolution optical data that we do, but it's commercial, so there is a hurdle there. Other countries are catching up, and theirs are not going to be commercial systems, so that's what is coming.

But I think that we still have a major contribution to make in terms of dissemination and the enabling technologies for that. I think that on the diplomatic side, if we can take a major leading role in utilization of those technologies in our development of policy and in our negotiations, then, by example, other countries will as well. And this is only one arena, the disaster response arena. I think geospatial technologies play an even greater potential role in the non-disaster environment. You know, it's not just utilization during a disaster but in terms of everyday climate change, in terms of food security, in terms of developing nations, and I picked up from Lee a recent photocopy from State Magazine where they were talking about delivery of satellite data in Sri Lanka, really just in terms of situational awareness. So I think emphasizing that -- keeping the bar high and being a good example -- is a good thing.

Andy Reynolds: Melba, Andy Reynolds from the STAS office. Nice to have you back. I'm sorry to say this goes back to emergency response, but in your hierarchy you said pre-event: major contribution; recovery: major contribution; but modest in the response phase. There are many participants here today, many of our guests who work in stabilization and reconstruction and disaster relief -- the Humanitarian Issues Unit is well-represented, as you acknowledged. I wonder if you could tell us where you see the major requirements or needs in the response phase of a disaster -- where we are lacking and what more can be done?

Dr. Melba Crawford: I think a lot of the problem with response is a natural limitation: it's a timing issue. Unless it's a hurricane, which you can often observe several days in advance and prepare for, many times you are not going to be positioned for immediate response. So I would say the least advanced technologies are currently the most useful for immediate response: search and rescue kinds of things. You need high-resolution pictures, as I said, of bridges and inundated areas, but even at the best of times it's typically going to be a few hours until you can get satellite imagery and get it processed in a meaningful way. So, in terms of what you can do, subject still to that limitation: we would love to have constellations of satellites, so you can certainly try to overcome some of that time issue, but in terms of the processing there will always be a bit of a lag time. Wherever you can improve that, and also continue to inform people in non-disaster times of the value of geospatial products and get people using them, they will become more reliant on them -- because a disaster is not the time when you want to learn anything new; it's got to be part of your daily regime if you're going to actually utilize it.

Female Speaker: If there aren't any other questions [inaudible].

Dr. Melba Crawford: Thank you.