Monday, February 29, 2016

Adding GCP's to Pix4D Software

Pix4D is an image-processing software package that converts large image sets into georeferenced 2D mosaics and 3D models by constructing point clouds. It has a wide range of applications, including precision agriculture, mine and quarry mapping, natural resource management, emergency response, construction, archaeology, and more. The software is extremely user friendly, almost to a fault: users can easily neglect to adjust the coordinate systems for their X, Y, and Z values.

Ground control points (GCPs) are surveyed ground coordinates used to correctly position imagery in relation to the Earth. GCPs offer an excellent means of improving the quality and accuracy of an aerial imagery data set. They are a useful field tool, but it is crucial that they are collected in a way that yields accurate, high-quality results. GCPs are collected with a survey-grade global positioning system (GPS) receiver, which records X, Y, and Z coordinates. Pix4D can use these coordinates, but as mentioned above it is easy to neglect picking an appropriate coordinate system to help reduce distortion.

The goal of this lab is to familiarize ourselves with using GCPs to create an orthomosaic in Pix4D. Following last week's introduction to Pix4D, we take a step beyond georeferenced images and use GCPs collected in the field to tie the images down. While Pix4D can produce imagery without GCPs, as shown last week, GCPs enhance the accuracy of the product; how much depends on the type of GPS used to collect the points. The Pix4D manual highlights three methods for adding GCPs. Method A (Figure 1) is used when the image geolocation and the GCPs are in a known coordinate system. This is the most common method and requires the least time and less manual input than the other methods.

                                                      Figure 1: Method A
Method B (Figure 2) is used when the initial images are without geolocation, the initial images are geolocated in a local coordinate system, or the GCPs are in a local coordinate system.

                                                    Figure 2: Method B
Method C (Figure 3) works for any case, no matter the coordinate system of the images or GCPs. It is the most time consuming, since the GCPs require more manual intervention, but it is the best choice for "overnight processing" (Methods A and B are not).

Figure 3: Method C    
Methods

Here I will elaborate on the methods used to add GCPs and process the imagery. As mentioned above, Method A was used to add GCPs because the coordinate systems were known and could be selected from Pix4Dmapper's database. The two coordinate systems do not need to be the same, as Pix4D can convert between them. This is the most common method of adding GCPs, and it allows marking the GCPs on the images with minimal manual intervention. Before we can do that, though, we must first create a new project and add images, as in the previous blog. There were a total of 312 images from the flight, collected by a Sony ILCE-6000 with GeoSnap. The default coordinate system of WGS84 was changed to NAD83 UTM Zone 15N to limit distortion in our small area of interest, and the vertical coordinate system was set to mean sea level (EGM96) to account for the vertical distortion usually present with the TopCon coordinate system.

Once the data has been correctly added and the initial processing is complete, GCPs can be imported using the GCP/Manual Tie Point Manager. Make sure the GCP coordinate system is correct. The GCPs should be visible as blue X's (Figure 4). From here the GCP/Manual Tie Point Manager can be opened to adjust and correct the imagery (Figure 5). At a minimum, three images must be corrected, but in this project roughly 6-9 GCPs were corrected.
Figure 4: The blue x's indicate the location of the GCPs.
Figure 5: Here is a portion of the Manual Tie Point Manager Window
where GCPs were corrected. The more zoomed in an image is the more weight
it carries in correcting the rest of the images with this GCP.
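Before opening the manager, the GCPs themselves are usually prepared as a plain comma-separated text file that Pix4D can import. The exact column order should be checked against the Pix4D manual, but a label, X (easting), Y (northing), Z (elevation) layout like the sketch below is typical. The coordinates here are invented for illustration only:

```python
import csv
import io

# Hypothetical GCPs in NAD83 / UTM zone 15N: label, easting (m), northing (m), elevation (m).
# These coordinate values are made up for illustration, not from the actual survey.
gcps = [
    ("GCP1", 621450.12, 4962310.55, 251.30),
    ("GCP2", 621510.87, 4962280.02, 250.85),
    ("GCP3", 621478.40, 4962255.71, 252.10),
]

# Write the rows in the simple label,X,Y,Z form Pix4D's GCP manager can read
buf = io.StringIO()
csv.writer(buf).writerows(gcps)
gcp_csv = buf.getvalue()
print(gcp_csv)
```

Whatever tool the points come from, the key check is that the X/Y/Z values are in the coordinate system you select in the GCP manager, not the one the receiver happened to log in.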
After the GCPs have been corrected, the final processing can be completed and a quality report generated. The same imagery was then processed without GCPs to investigate the differences.

Results

After final processing there are three important products: an RGB mosaic, a DSM, and a quality report. Figure 6 was made in ArcScene with 25 x 25 meter grids.

To further explore the results, the quality report was examined. Imagery processed without GCPs appears to be given a lower RMS error (Figure 7) than the imagery processed with GCPs (Figure 8). This seems counterintuitive, because with GCPs the mosaic should become more accurate and have less error.
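For context, the RMS value in the quality report is just the root-mean-square of the residuals at each point, so a lower value normally means a tighter fit. A minimal sketch of the computation, with hypothetical residual values:

```python
import math

def rms_error(residuals):
    """Root-mean-square of per-point residuals (same units as the input)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical per-GCP residuals in metres, for illustration only
print(rms_error([0.03, -0.05, 0.02, 0.04]))
```

One possible explanation for the counterintuitive report is that without GCPs the residuals are measured against the camera geotags themselves, so the fit can look tight even when the whole mosaic is shifted relative to the ground.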





Conclusion

While the RMS error scores can be confusing, collecting GCPs and using them to tie down your imagery is one of the best techniques to improve accuracy. While GeoSnap claims high accuracy, there seems to be some discrepancy with regard to z-axis (vertical) accuracy. That said, collecting GCPs isn't always viable: they can take a while to set up and survey. GeoSnap is useful for areas where setting up and collecting GCPs would be difficult or unnecessary. While using GCPs with surveyor-grade equipment is the most accurate approach, I believe GeoSnap will be good enough in some cases.

Monday, February 15, 2016

Use of the GEMS Processing Software

Geo-location and Mosaicing Systems (GEMS) is a hardware and software package initially designed for Unmanned Aerial Systems (UAS) precision agriculture applications. To that end, the agricultural multispectral sensor optimizes certain parameters: coverage rate, field of view, percent smear, platform altitude and velocity, image overlap for efficient mosaicking, frame rate, exposure times, and ground sampling distance (GSD). With these parameters the hardware captures RGB, Near-Infrared (NIR), and Normalized Difference Vegetation Index (NDVI) imagery, along with pixel GPS coordinates, in a single flight. The software automatically stores sub-images and can generate RGB, NIR, and NDVI mosaics and "geo-locate" images. In this assignment I will review the GEMS software and hardware and assess the quality of the GEMS software outputs.

Figure I: Here is the GEMS workflow taken from the software manual,
from platform to computer.

Hardware Integration Manual

The Hardware Integration Manual covers mounting the sensor and assembling connections to the platform. It also discusses important camera parameters that can affect the choice of platform and flight time. For instance, Ground Sampling Distance (GSD) (Figure II) determines the scale of a photo from the ratio between the camera's focal length and the platform's altitude above ground level (AGL). For the GEMS sensor the GSD is 5.1 cm at 400 ft or 2.5 cm at 200 ft. Camera resolution is 1.3 MP RGB and 1.3 MP mono, which equates to 1280 x 1024 pixels for the dimensions of a single image.

Figure II: This image shows how focal length is affected by altitude. 
More information about Ground Sampling Distance.

Image Sensor Resolution: 1280 x 960 pixels
Sensor Dimension: 4.8 x 3.6 mm
Pixel Size: 3.75 x 3.75 μm
Horizontal Field of View: 34.622 degrees
Vertical Field of View: 26.314 degrees
Focal Length: 7.70 mm

The manual states that flying lower and slower gives a finer resolution on the ground (smaller GSD), while flying higher and faster increases the coverage rate, so there is a natural trade-off. This matters when planning a mission for a given area of interest. For a larger area, flying slower and lower with a multirotor is not recommended, because the battery drains faster. Flying lower also decreases the field of view, so the flight-line spacing (which ensures overlap) must decrease. For a large study area it is better to fly higher and faster with a fixed wing (or possibly a multirotor). The mission is also shaped by the type of sensor, its pixel resolution, and the GSD; these in turn dictate the flight time, height, and speed appropriate to the study area, and thus the type of UAV to use.
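The GSD relationship described above can be sketched directly from the spec list: ground distance per pixel is roughly pixel size times altitude divided by focal length. Plugging in the published pixel size and focal length gives about 3 cm at 200 ft, in the same ballpark as the quoted 2.5 cm (the published figure presumably assumes slightly different parameters):

```python
def gsd_m(pixel_size_um, focal_length_mm, altitude_m):
    """Ground sampling distance in metres: ground width covered by one pixel."""
    return (pixel_size_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)

# Spec-sheet values: 3.75 um pixels, 7.70 mm focal length; 200 ft ~= 60.96 m AGL
print(round(gsd_m(3.75, 7.70, 60.96) * 100, 1), "cm")   # finer GSD at lower altitude
print(round(gsd_m(3.75, 7.70, 121.92) * 100, 1), "cm")  # double the altitude, double the GSD
```

The linear relationship is the point: doubling altitude exactly doubles the GSD, which is why the 200 ft and 400 ft figures in the manual sit in a 1:2 ratio.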

Software Integration Manual 

The Software Integration Manual helps navigate the graphical user interface (GUI) used to view and manage flight data. When data is downloaded from the jump drive, each flight is labeled Week (X), TOW (H-M-S). These numbers specify the instant data collection began for that flight.

X = week
H = hours
M = minutes
S = seconds

This labeling system works well when scripting in PyScriptor, since the numbers can never repeat and are easy to enter in a script. Otherwise, online converters can translate the label into common time. Here are some online sources that can help convert or explain GPS time stamps: GPS Time Calculator, GPS Calendar
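For readers who would rather script the conversion than use an online tool, GPS week/TOW timestamps count from the GPS epoch (January 6, 1980), with GPS time running ahead of UTC by the accumulated leap seconds (17 s in early 2016). A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(week, hours, minutes, seconds, leap_seconds=17):
    """Convert a GPS Week (X), TOW (H-M-S) label to UTC.

    GPS time is ahead of UTC by the accumulated leap seconds
    (17 s as of early 2016; the current offset should be looked up).
    """
    tow = hours * 3600 + minutes * 60 + seconds  # time of week in seconds
    return GPS_EPOCH + timedelta(weeks=week, seconds=tow - leap_seconds)

# Week 0, TOW 0-0-17 lands exactly on the GPS epoch once the leap seconds are removed
print(gps_to_utc(0, 0, 0, 17))
```

The leap-second offset is the easy part to get wrong; it changes over time, so a script used years later needs the updated value.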

Data is uniquely stored in BIN files that can be overlaid on satellite imagery. Once a mosaic is built there are several imagery outputs: NDVI-FC1, NDVI-FC2, NDVI-Mono, RGB, and RGB-Mono. The NDVI-FC outputs differ by color schema, each using a different color scheme to display the health of vegetation. The RGB output is high-resolution imagery that displays greater detail than the Esri satellite basemap.

A Quick Software Run-through!

Here I will walk through a simple software demonstration so that you can better understand how the GEMS software is used. I will then also use Microsoft Image Composite Editor (ICE) and perform a quality assessment between the two mosaicked images.




Upon opening the GEMS window you are prompted to select the BIN file (flight data) you wish to use. Again, these are labeled by the GPS time attached when data collection began. First, click the "Run" tab and select "Run NDVI Initialization". This yields the NDVI-FC1, NDVI-FC2, and NDVI-Mono imagery, which shows vegetation health using different color schemes to express near-infrared values. Next, select the "Run" tab again and select "Generate Mosaics". For Fast Mosaic Mode, simply check the first two boxes (shown in Figure III). For a better-quality mosaic, be sure to select "Perform Fine Alignment" for the Fine Alignment Mosaic Mode. This generates the tiles and mosaics.

Figure III: Here, the type of mosaic can be specified with the "Perform Fine Alignment"
check box. Checking it yields a better-quality mosaic, but takes more time and computing power.


Data can be viewed within the GEMS software; however, it is best to view the GeoTIFFs (georeferenced images) in other software such as ArcMap or Pix4D, where they can be manipulated and cartographic map elements can be added (title, author, date, data source, legend, scale bar, north arrow; see my first blog for more details). To do this, export the images to Pix4D under the "Tools" tab. The images can then be found in the Tiles folder within the flight data folder.

For comparison, I also stitched the images in Microsoft Image Composite Editor (ICE) to examine the differences between the two programs. ICE is easy to use, and the job can be done in four simple steps: import the selected individual images (Figure IV), stitch the images, crop the stitched image, and export it.

Figure IV: Above, the four steps can be seen in the top center of the image. This step highlights importing the images to be stitched together. (Here is a website showing a simple Microsoft ICE task such as creating panoramas: Stitching Images.)

Results


Different software packages can mosaic or stitch imagery, and each produces a different product. Some may be more aesthetically pleasing but less accurate in terms of latitude and longitude, if they carry that data at all. Some products can also have file sizes so large they can't always be transferred or exported; for instance, my Microsoft ICE stitched images were too large.


Figure IV: Here are maps of images I processed in fall semester 2015. Flight data was collected over a soccer field and pavilion to detect changes in vegetation health.

Figure V: Here is the imagery produced from the community garden in Eau Claire. It is similar to the soccer field imagery except that there is more vegetation variation (trees, garden plants, grass, dirt, pavement).


Figure VII: Here is a stitched image produced in Microsoft ICE. This imagery is high resolution and the software does a great job stitching it together. Overall it is a very smooth image.

Conclusion

GEMS Exports
It is apparent in the NDVI and mono imagery that there are some image overlap issues (the sidewalk doesn't always line up with the basemap imagery, and there are blotches across the image). Regardless, the NDVI-FC1 shows wear on parts of the soccer field (north of the pavilion) where most of the players run. While this can also be seen in the NDVI-FC2 and Mono imagery, it is best expressed in the NDVI-FC1 color scheme. However, this scheme can seem counterintuitive, as most people associate green with healthy vegetation and red with unhealthy; hence the two different NDVI color schemes.

Microsoft ICE Products
Microsoft ICE is freeware that I think offers an amazing high-resolution product. It is easy to use and doesn't take too long, depending on the number of images you are using. However, I could not export my image, as the software crashed for unknown reasons. The end result is not georeferenced and should be used purely for image display purposes.

Data Talk
While the GEMS software is user friendly, the images produced are not orthorectified as claimed; they are merely georeferenced. Unlike orthorectified images or orthomosaics, georeferenced images are not tied down to the Earth in the X, Y, and Z (elevation) dimensions. An orthorectified image has a unified scale across the whole image. A DEM is required to orthorectify an image, and ground control points can be used to tie the image down even more accurately. It is also important to note that plain JPEG files have no geographic information tied to them and should not be used when spatially analyzing imagery.

Software Analysis
Overall, the GEMS software is easy to use. It was designed with precision agriculture in mind, as the sensor generates NDVI imagery, and you don't need a geography degree to use it. With that in mind, though, GEMS uses some terms inaccurately, terms that folks without a geography background might not question. I am of course referring to georeferenced versus orthorectified imagery. For what the software is designed to do, I think it works well. However, there are other software packages we will explore this semester that offer more accurate imagery.




Monday, February 8, 2016

Constructing Maps with UAS Data

Why are proper cartographic skills essential in working with UAS data?

Properly displaying UAS data with cartographic elements makes the data more accessible and understandable to an audience; raw UAS data is of little use on its own. Cartographic skills allow the author to direct the audience's attention to the main focus of the map. It also needs to be clear where the imagery was taken, since UAS allows data collection in most landscapes.

What are the fundamentals of turning either a drawing or an aerial image into a map?

A drawing or aerial image must contain the seven basic cartographic elements: title, author, legend, scale, north arrow, data source, and date of production. A title should describe what and where your map is. The legend needs to be well organized and easy to interpret. Scale should be in the metric system unless your audience requires imperial units. The data source is crucial: just as you cite sources in a paper, you need to cite where your data came from. And of course the map must include the author, so they receive credit, as well as the date it was produced.

What can spatial patterns of data tell us?

Spatial patterns are very important when delineating regions or objects. We use visual cues to help identify trees, vehicles, roads, grass, etc. These visual cues include texture, shade, shape, color, pattern, and value.


The objective of this lab is to lay out the basics of developing proper maps with UAS data. It is important to develop and refine cartographic skills in relation to UAS data in the context of a GIS. In this lab we will work with various UAS data and GIS software to construct cartographically pleasing maps.

Methods

Flash Flight Logs

For this part of the lab I opened a KMZ file in Google Earth and in ArcMap.



What components are missing from this map?

While Google Earth helps us visualize the flight, the image is not useful as a map because it is missing important components: a title, a scale, a legend, an author, and a date of production. It is not an appropriate map to use.

Advantage of viewing in Google Earth.

Flight log data in Google Earth displays the height and path the aircraft took. ArcMap displays a 2D version of the flight with distinct 'U' shapes representing the craft turning for another row of data collection. The line cutting through these rows signifies that data collection is complete and the craft then returns straight to the designated landing area.

When I was done viewing the data in ArcMap, I saved the flight log as a KML file, which was then converted into a file type compatible with ArcGIS using the "KML to Layer" tool.

Tlogs



In this part of the lab we converted a Tlog (telemetry log), which stores data about the flight path, into a KML. To do this I opened Mission Planner and selected the Telemetry Logs tab on the left side of the screen. From there I selected "Tlog to KML" and imported the desired Tlog file for conversion. I then used the same "KML to Layer" tool in ArcMap to make a file compatible with the GIS.

GEMS Geotiffs

For this part of the lab I added six raster layers to ArcMap and ran the "Calculate Statistics" tool, which yields the min, max, mean, and standard deviation for each layer based on its pixel values. After the tool runs, the resulting statistics can be found in the layer's Properties.
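As a sanity check on what the tool reports, the same four statistics can be reproduced in plain Python once a band's pixel values are in hand. This sketch uses a made-up 3 x 3 band with -9999 standing in for the NoData value (both are assumptions for illustration):

```python
import math

def band_statistics(band, nodata=None):
    """Min, max, mean, and population standard deviation of a raster band,
    skipping NoData cells (mirroring what Calculate Statistics reports)."""
    values = [v for row in band for v in row if v != nodata]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return {"min": min(values), "max": max(values),
            "mean": mean, "std": math.sqrt(var)}

# Hypothetical 3x3 band with one NoData cell (-9999)
demo = [[1.0, 2.0, 3.0],
        [4.0, -9999, 6.0],
        [7.0, 8.0, 9.0]]
print(band_statistics(demo, nodata=-9999))
```

The important detail is excluding NoData cells before computing; leaving a -9999 in would wreck the mean and standard deviation.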



Pix4D Data Products

For the Pix4D imagery we also used the "Calculate Statistics" tool.



What is the difference between a DSM and an Orthomosaic?
A DSM (digital surface model) is a raster whose pixel values record surface elevation, including features such as trees and buildings, while an orthomosaic is a stitched, geometrically corrected image with a uniform scale whose pixel values are color.
Results

Flight Logs




Based on the tight 'U' turns the UAS makes, the aerial vehicle is likely a multirotor rather than a fixed wing. A multirotor has better maneuverability, and when the size of the study area is taken into account as well, a multirotor makes more sense.

Line spacing will vary based on the sensor's ability to capture a larger area from its focal point. For instance, some cameras have a wider field of view, so the resulting image contains more of the area than that of a camera with a narrower view, which is restricted to what is below or directly in front of it. To make a mosaic, the images must overlap, because of distortion at the edges of each image. As a sensor rises in altitude, the line spacing can increase; the lower a sensor flies, the smaller the line spacing must be. Again, this relates to how much area the camera can take in, as well as the amount of distortion at the edges of an image.
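The overlap reasoning above can be sketched with simple trigonometry: the ground footprint at nadir is 2 x altitude x tan(HFOV/2), and the line spacing is that footprint reduced by the desired sidelap. The 34.622-degree HFOV below matches the GEMS sensor spec, while the 70% sidelap is an assumed planning value, not a GEMS recommendation:

```python
import math

def footprint_width_m(altitude_m, hfov_deg):
    """Ground width imaged at nadir, from altitude and horizontal field of view."""
    return 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)

def line_spacing_m(altitude_m, hfov_deg, sidelap=0.70):
    """Flight-line spacing that preserves the given sidelap fraction."""
    return footprint_width_m(altitude_m, hfov_deg) * (1 - sidelap)

# GEMS HFOV is 34.622 degrees; 200 ft ~= 60.96 m AGL
print(round(footprint_width_m(60.96, 34.622), 1), "m footprint")
print(round(line_spacing_m(60.96, 34.622), 1), "m between flight lines")
```

Because footprint scales linearly with altitude, halving the flying height halves the allowable line spacing, which is the trade-off described above.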



Geotiff


When the GEMS RGB orthomosaic is overlaid on a basemap, distortion can be seen around the edges of the imagery. For instance, looking up at the track (north), you can see that the images are not quite aligned. This is also not a very smooth image: the edges are sharp and at different angles. When I zoom in further in ArcMap, the basemap imagery becomes blurry, whereas the RGB image remains detailed. It should also be mentioned that the basemap imagery shows no community garden.


Figure X: This is a map of the image produced in ArcScene. Because
ArcScene doesn't allow for scale representation, I created a fishnet polyline feature
class in ArcMap to represent the scale. Each grid cell is 20 x 20 meters.


Conclusion

UAS data is a useful tool for a cartographer because it supplies a much more detailed image when mapping small areas. When processed and managed correctly, it gives an accurate representation of an area. Not only do UAS sensors supply good RGB imagery; other sensors are sensitive to different wavelengths, allowing a cartographer to use imagery that portrays different data. For instance, NDVI data can help with vegetation health assessments.

Of course, this data has its limitations as well. Working with any kind of imagery introduces distortion that grows the farther you move from the image center. Overlapping images helps minimize this, but an image will always have some amount of distortion. One way to tackle the problem is to use ground control points, which we will discuss later in the semester. Dealing with UAS data also takes a lot of computing power: because so many images are collected and then stitched together, you'll probably be dealing with long processing times, and that time is also affected by the type of software you use, for instance open-source versus proprietary or free software. Not all data is pristine, either. For instance, when a hillshade was applied to the DSM data, the image became textured and less smooth; it did not, in fact, represent what was actually happening on the surface.