Goals and Objectives:
In the context of this lab project for Geography 337, students will learn how to properly download and use data gathered from several online platforms maintained by a variety of organizations. The purpose of learning proper data management is to ensure students are comfortable seeing and using many different data formats and download platforms. Prior to this lab, students were given "canned" data from Esri services, already perfectly organized and groomed for easy use in assignments. In the real world, clean and perfectly groomed data is never a sure thing. To prepare students for future jobs, this project aims to further familiarize them with proper techniques for downloading and using data in a specific geographic context: mining in Wisconsin for sand used primarily in hydraulic fracturing, conducted across the country to produce natural gas and oil for energy.
General Methods Used:
To get started with this semester-long project, students first needed to obtain data from different online sources. Our first stop was the US Department of Transportation, for NTAD data on the railway network. The download platform for this site was easy to manage and provided quality metadata for the datasets used. The next stop was to compile land cover data; for this we went to the USGS National Map viewer. The viewer provides access to a great abundance of information about land cover across the United States. Specifically, we are interested only in the NLCD (National Land Cover Database) for Trempealeau Co., WI.
The third stop for data collection was the USDA, which has a simple Geospatial Data Gateway platform for downloading data. Here we downloaded the Cropland Data Layer along with its metadata. The next step in data acquisition was the Trempealeau County database and land records. This county database holds a great deal of information we will find useful over the project's duration, so we downloaded the entire database.
The final data collection step was gathering soil data, which can be done from the USDA NRCS Web Soil Survey (WSS). This dataset was the trickiest to navigate and required more time to get into a usable format for our project. After we downloaded the SSURGO data from the WSS, we performed a join and built a relationship class to get the current soil information into the Trempealeau Co. (TMP) geodatabase we downloaded.
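The actual join was done inside ArcGIS, but the idea can be sketched in plain Python. SSURGO ships its soil attributes in tables that link back to the map unit polygons through a shared key field (mukey); the records below are hypothetical examples, not real SSURGO data.

```python
# A minimal stand-in for the tabular join performed in ArcGIS. SSURGO
# attribute tables link to the map unit polygons through "mukey".
# All values here are hypothetical illustrations.

# Attribute records keyed by map unit key (mukey), as from a SSURGO table.
soil_attributes = {
    "424242": {"muname": "Example silt loam", "drainage": "Well drained"},
    "424243": {"muname": "Example sandy loam", "drainage": "Excessively drained"},
}

# Feature records as exported from the map unit polygon layer.
map_units = [
    {"fid": 1, "mukey": "424242"},
    {"fid": 2, "mukey": "424243"},
    {"fid": 3, "mukey": "424242"},
]

def join_soil_info(features, attributes):
    """Attach soil attributes to each polygon record via its mukey."""
    joined = []
    for feat in features:
        attrs = attributes.get(feat["mukey"], {})  # empty dict if no match
        joined.append({**feat, **attrs})
    return joined

joined = join_soil_info(map_units, soil_attributes)
print(joined[0]["muname"])  # → Example silt loam
```

A relationship class in the geodatabase serves the same purpose persistently: it keeps the polygons and tables linked through mukey without physically merging the fields.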
Now that we have all the data in the right place on our computers, we unzipped the files into a working folder. The unzipped files were then reviewed, and the important features were extracted, in particular the rasters for land cover and cropland cover. With data acquisition complete, we can begin the next portion of the project: writing a Python script to project the rasters, clip/extract them by mask, and load them into our geodatabase.
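A rough sketch of that workflow, with hypothetical paths and names: the unzip step uses only the Python standard library, while the project/clip step assumes an ArcGIS installation providing the arcpy package (ProjectRaster and ExtractByMask, as in the ArcPy API) and is only defined here, not run.

```python
import zipfile
from pathlib import Path

def unzip_downloads(download_dir, work_dir):
    """Extract every .zip archive in download_dir into a subfolder of work_dir."""
    work = Path(work_dir)
    work.mkdir(parents=True, exist_ok=True)
    extracted = []
    for archive in sorted(Path(download_dir).glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(work / archive.stem)  # one subfolder per archive
            extracted.extend(zf.namelist())
    return extracted

def project_and_clip(in_raster, boundary, out_raster, spatial_ref):
    """Project a raster, then clip it to the county boundary (requires ArcGIS)."""
    import arcpy  # only available with an ArcGIS installation
    arcpy.CheckOutExtension("Spatial")
    projected = "in_memory/projected"
    arcpy.management.ProjectRaster(in_raster, projected, spatial_ref)
    clipped = arcpy.sa.ExtractByMask(projected, boundary)
    clipped.save(out_raster)  # e.g. a raster dataset inside the geodatabase
```

The mask in ExtractByMask would be the Trempealeau County boundary from the county geodatabase, so every raster ends up projected and trimmed to the study area.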
Figure 1: The three rasters downloaded from the online platforms. A: land cover (USGS 2013); B: cropland cover (USDA 2011); C: elevation model (DEM) from USGS; D: locator map for Trempealeau Co., WI.
Data Accuracy:
As part of downloading data, reviewing the pertinent metadata is very important, and accuracy standards carry great consequence for projects in the real world. After the data was collected, we looked inside each metadata set and compiled a chart of how accurate the data is, along with other important lineage information. This documentation matters for credibility: it determines how confident you can be that the final project reflects real-world truth and is useful for the application.
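Pulling the accuracy statements out of each metadata file can also be scripted. The fragment below is a made-up snippet shaped like FGDC CSDGM metadata (the standard most of these agencies use); the element names are from that standard, but the values are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment shaped like FGDC CSDGM metadata; real values come
# from the metadata files shipped with each download.
SAMPLE = """<metadata>
  <dataqual>
    <attracc><attraccr>Attribute accuracy was not formally tested.</attraccr></attracc>
    <posacc>
      <horizpa><horizpar>Horizontal accuracy within 30 meters.</horizpar></horizpa>
    </posacc>
  </dataqual>
</metadata>"""

def accuracy_summary(xml_text):
    """Pull the accuracy statements out of one FGDC-style metadata record."""
    root = ET.fromstring(xml_text)

    def grab(path):
        node = root.find(path)
        return node.text.strip() if node is not None and node.text else "not stated"

    return {
        "attribute_accuracy": grab("dataqual/attracc/attraccr"),
        "horizontal_accuracy": grab("dataqual/posacc/horizpa/horizpar"),
    }

summary = accuracy_summary(SAMPLE)
print(summary["horizontal_accuracy"])  # → Horizontal accuracy within 30 meters.
```

Running a function like this over every downloaded metadata file gives the rows of the accuracy chart in Figure 2.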
Figure 2: A table of various accuracy information from the metadata compiled from the online sources.