
Working with MODIS in R



I am new to using MODIS data and was hoping to get some help starting out. I have been able to download the HDF file, but viewing it in R has been a problem for me. So far I have tried the gdalUtils package:

gdal_translate('MOD11A1.A2012027.h11v04.005.2012028122822.hdf', 'test.tiff', sd_index=1)

This gave me a TIFF image that I then read back into R using

data <- readTIFF('test.tiff')

This gave me a matrix in R that I could work with, which is ideally what I want. The only issue is that the values in the matrix were all decimals, and when I plotted the image, each axis ran from 0 to 1 instead of the proper latitude and longitude. The file I am using is LST in Kelvin, so these values are not correct. I know gdalwarp does the reprojection, but I am unsure how to use it; in particular, I don't know what to pass for s_srs and t_srs.

Can anyone point me in the right direction as a beginner? I have been reading Steve Mosher's blog, so I have MRT and OSGeo4W installed, plus all the packages he suggests.


I would suggest using the MODIS Reprojection Tool (MRT - https://lpdaac.usgs.gov/tools/modis_reprojection_tool) to project the data and convert it from HDF to TIFF. It's free (you just need to create an account with NASA), and you can be sure your data is being transformed properly. Then you can work with the TIFF in R.
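Alternatively, gdalwarp can do the reprojection directly. MODIS land products are distributed in a sinusoidal projection on a custom sphere, so that is what s_srs needs to describe; t_srs can be plain lat/lon (EPSG:4326). As a rough sketch of what that reprojection does mathematically (the PROJ string in the comment is the standard one for MODIS sinusoidal; the function names are just illustrative):

```python
import math

# Radius of the custom sphere used by the MODIS sinusoidal projection (metres).
R = 6371007.181

# The matching gdalwarp arguments would be roughly:
#   s_srs: '+proj=sinu +lon_0=0 +x_0=0 +y_0=0 +a=6371007.181 +b=6371007.181 +units=m +no_defs'
#   t_srs: 'EPSG:4326'   (plain lat/lon)

def sinu_forward(lon_deg, lat_deg):
    """Project lon/lat (degrees) to sinusoidal x/y (metres)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return R * lon * math.cos(lat), R * lat

def sinu_inverse(x, y):
    """Invert the projection back to lon/lat in degrees."""
    lat = y / R
    lon = x / (R * math.cos(lat))
    return math.degrees(lon), math.degrees(lat)
```

In practice you would pass those two strings straight to gdalwarp rather than rolling your own math. On the decimal values: if I remember right, readTIFF rescales integer samples to the 0-1 range, so you'd need to undo that scaling and then apply the MOD11A1 LST scale factor (0.02) to get Kelvin back.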


The latest comprehensive enterprise systems analysis & integration job descriptions.

Designs, develops and implements information systems and operations systems in support of network, communications and core business functions. Evaluates end user needs, client goals, budgets and existing applications to define system requirements and technical standards. May be responsible for drafting user guides and beta testing pre-release systems. Relies on extensive knowledge and professional discretion to achieve goals. Usually reports to a department head or senior management. Manages others. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree and at least 7 years of relevant experience.

CRM Application Administrator

Responsible for administering the Customer Relationship Management (CRM) software. Responsible for maintaining the CRM system and performing necessary updates. Keeps track of enterprise-wide usage of the system and performs administrative tasks. Works under general supervision and usually reports to a manager, though some ingenuity and flexibility is required. Must have a bachelor’s degree in area of specialty and at least 6 years of relevant experience.

CRM Application Architect

Designs, develops and constructs Customer Relationship Management (CRM) application systems and consults with clients to meet application needs. Relies on extensive knowledge and professional discretion to achieve goals. Usually reports to a manager. Manages others. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree in area of specialty and at least 7 years of relevant experience.

CRM Integration Specialist

Responsible for Customer Relationship Management (CRM) system integration and development. Ensures all functions of CRM system effectively work with all other applications and operating systems. Familiar with a variety of the field’s concepts, practices and procedures. Relies on extensive experience and judgment to plan and accomplish goals. Performs a variety of tasks. Leads and directs the work of others. A wide degree of creativity and latitude is expected. Typically reports to a manager or head of a unit/department. Requires a bachelor’s degree in area of specialty and at least 7 years of experience in the field or in a related area.

CRM Program Director

Leads the Customer Relationship Management (CRM) initiative for an organization. Reviews CRM project proposals to determine costs, timeline, funding, staffing requirements and goals. Relies on extensive knowledge and professional discretion to achieve goals. Usually reports to senior management. Manages others. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree in area of specialty and at least 10 years of relevant experience.

ERP Administrator

Ensures optimal performance for Enterprise Resource Planning (ERP) systems. Implements, evaluates and designs ERP systems and applications. Troubleshoots ERP-related issues and monitors ERP systems security. Installs new releases, system upgrades and patches, as required. Relies on limited knowledge and professional discretion to achieve goals. Works under general supervision and usually reports to a manager, though some ingenuity and flexibility is required. May require a bachelor’s degree in a related area and 3-5 years of relevant experience.

ERP Analyst

Helps with the implementation and ongoing maintenance of the Enterprise Resource Planning (ERP) system. Tests ERP layout to ensure the system is meeting business needs. Customizes and configures workflow to facilitate ERP integration with other applications. Relies on knowledge and professional discretion to achieve goals. Significant ingenuity and flexibility is required. May require a bachelor’s degree and 0-6 years of relevant experience.

ERP Program Manager

Acts as liaison between key users of ERP system and ERP system developers. Finds solutions to process weaknesses and tests solutions. Manages timeline, resources, requirements traceability and overall operational project communication. Usually manages a team across disciplines. Requires a bachelor’s degree and a minimum of 5 years of relevant experience.

ERP Programmer

Evaluates, assesses and enhances the programming systems needed to support an organization’s Enterprise Resource Planning (ERP) applications. Ensures that other software can be fully integrated into the ERP system. Develops new modules to enhance system performance. Relies on knowledge and professional discretion to achieve goals. Usually reports to a supervisor. Significant ingenuity and flexibility is expected. May require a bachelor’s degree in area of specialty and 0-6 years of relevant experience.

ERP Project Manager

Designs, produces and executes the Enterprise Resource Planning (ERP) system. Establishes timelines, assigns resources and monitors ongoing progress. Assesses performance of ERP system and recommends enhancements. Relies on extensive knowledge and professional discretion to achieve goals. Usually reports to senior management. Manages a group of ERP Analysts. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree in area of specialty and at least 8 years of relevant experience.

GIS Analyst

Utilizes Geographic Information System (GIS) techniques to better understand certain variables in a given geographic location. Extracts data from GIS software, performs analysis and offers detailed recommendations based on results. Provides maps and data sets to clients to supplement analysis. Knowledgeable of GIS software and technology. Works in conjunction with CAD drafters and technicians. Relies on knowledge and professional discretion to achieve goals. Usually reports to a supervisor or department head, though some ingenuity and flexibility is required. Requires a bachelor’s degree in area of specialty and 0-10 years of relevant experience.

Operating Systems Programmer

Evaluates, designs, implements and refines computer operating systems to meet business goals. Diagnoses, troubleshoots and documents related problems. Usually reports to a project leader, manager or department head. May require an associate degree or its equivalent and 0-10 years of relevant experience.

SAP Basis Consultant

Assists in the analysis, programming, design and implementation of SAP Basis systems. Develops SAP architecture requirements and specifications and ensures the system is meeting corporate needs. Relies on extensive knowledge and professional discretion to achieve goals. Typically reports to a department head. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree in area of specialty and at least 7 years of relevant experience.

SAP Program Manager

Works directly with project teams to confirm requirements and scope projects. Ensures SAP projects are delivered within scope, time and budget requirements. Regularly provides manager with project status updates and conveys any complications. Usually reports to a department head. Requires a bachelor’s degree and 7-10 years of relevant experience.

SAP Project Manager

Manages all activities related to SAP implementation projects. Ensures that all SAP project goals are achieved. Relies on extensive knowledge and professional discretion to achieve goals. Typically reports to senior management. Manages others. Significant ingenuity and flexibility is expected. Requires a bachelor’s degree in area of specialty and at least 7 years of relevant experience.



MODIS Shows Earth is Greener

Over the last two decades, the Earth has seen an increase in foliage around the planet, measured in average leaf area per year on plants and trees. Data from NASA satellites shows that China and India are leading the increase in greening on land. The effect stems mainly from ambitious tree planting programs in China and intensive agriculture in both countries. Credits: NASA Earth Observatory

The world is literally a greener place than it was 20 years ago, and data from NASA satellites has revealed a counterintuitive source for much of this new foliage: China and India. A new study shows that the two emerging countries with the world’s biggest populations are leading the increase in greening on land. The effect stems mainly from ambitious tree planting programs in China and intensive agriculture in both countries.

The greening phenomenon was first detected using satellite data in the mid-1990s by Ranga Myneni of Boston University and colleagues, but they did not know whether human activity was one of its chief, direct causes. This new insight was made possible by a nearly 20-year-long data record from a NASA instrument orbiting the Earth on two satellites. It’s called the Moderate Resolution Imaging Spectroradiometer, or MODIS, and its high-resolution data provides very accurate information, helping researchers work out details of what’s happening with Earth’s vegetation, down to the level of 500 meters, or about 1,600 feet, on the ground.

The world is a greener place than it was 20 years ago, as shown on this map, where areas with the greatest increase in foliage are indicated in dark green. Data from a NASA instrument orbiting Earth aboard two satellites show that human activity in China and India dominates this greening of the planet. Credits: NASA Earth Observatory

Taken all together, the greening of the planet over the last two decades represents an increase in leaf area on plants and trees equivalent to the area covered by all the Amazon rainforests. There are now more than two million square miles of extra green leaf area per year, compared to the early 2000s – a 5% increase.

“China and India account for one-third of the greening, but contain only 9% of the planet’s land area covered in vegetation – a surprising finding, considering the general notion of land degradation in populous countries from overexploitation,” said Chi Chen of the Department of Earth and Environment at Boston University, in Massachusetts, and lead author of the study.

An advantage of the MODIS satellite sensor is the intensive coverage it provides, both in space and time: MODIS has captured as many as four shots of every place on Earth, every day for the last 20 years.

“This long-term data lets us dig deeper,” said Rama Nemani, a research scientist at NASA’s Ames Research Center, in California’s Silicon Valley, and a co-author of the new work. “When the greening of the Earth was first observed, we thought it was due to a warmer, wetter climate and fertilization from the added carbon dioxide in the atmosphere, leading to more leaf growth in northern forests, for instance. Now, with the MODIS data that lets us understand the phenomenon at really small scales, we see that humans are also contributing.”

China’s outsized contribution to the global greening trend comes in large part (42%) from programs to conserve and expand forests. These were developed in an effort to reduce the effects of soil erosion, air pollution and climate change. Another 32% there – and 82% of the greening seen in India – comes from intensive cultivation of food crops.

Land area used to grow crops is comparable in China and India – more than 770,000 square miles – and has not changed much since the early 2000s. Yet these regions have greatly increased both their annual total green leaf area and their food production. This was achieved through multiple cropping practices, where a field is replanted to produce another harvest several times a year. Production of grains, vegetables, fruits and more has increased by about 35-40% since 2000 to feed their large populations.

How the greening trend may change in the future depends on numerous factors, both on a global scale and the local human level. For example, increased food production in India is facilitated by groundwater irrigation. If the groundwater is depleted, this trend may change.

“But, now that we know direct human influence is a key driver of the greening Earth, we need to factor this into our climate models,” Nemani said. “This will help scientists make better predictions about the behavior of different Earth systems, which will help countries make better decisions about how and when to take action.”

The researchers point out that the gain in greenness seen around the world and dominated by India and China does not offset the damage from loss of natural vegetation in tropical regions, such as Brazil and Indonesia. The consequences for sustainability and biodiversity in those ecosystems remain.

Overall, Nemani sees a positive message in the new findings. “Once people realize there’s a problem, they tend to fix it,” he said. “In the 70s and 80s in India and China, the situation around vegetation loss wasn’t good; in the 90s, people realized it; and today things have improved. Humans are incredibly resilient. That’s what we see in the satellite data.”

This research was published online, Feb. 11, 2019, in the journal Nature Sustainability.


For news media:

Members of the news media interested in covering this topic should get in touch with the science representative on the NASA Ames media contacts page.

Author: Abby Tabor, NASA’s Ames Research Center, Silicon Valley



The phrase "geographic information system" was coined by Roger Tomlinson in 1963, when he published the scientific paper "A Geographic Information System for Regional Planning". [5] Tomlinson, acknowledged as the "father of GIS", [6] is credited with enabling the first computerized GIS to be created through his work on the Canada Geographic Information System in 1963. Ultimately, Tomlinson created a framework for a database that was capable of storing and analyzing huge amounts of data, leading to the Canadian government being able to implement its National Land-Use Management Program. [7] [6]

One of the first known instances in which spatial analysis was used came from the field of epidemiology, in the "Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine" (1832). [8] The French geographer and cartographer Charles Picquet created a map outlining the forty-eight districts of Paris, using halftone color gradients to provide a visual representation of the number of reported cholera deaths per 1,000 inhabitants.

In 1854, John Snow, an epidemiologist and physician, was able to determine the source of a cholera outbreak in London through the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements of topography and theme existed previously in cartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena.

The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was originally drawn on glass plates but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the main typical features of a contemporary GIS, the photographic process just described is not considered to be a GIS in itself – as the maps were just images with no database to link them to.

Two additional developments are notable in the early days of GIS: Ian McHarg's publication "Design with Nature" [9] and its map overlay method and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system. [10]

Computer hardware development spurred by nuclear weapon research led to general-purpose computer "mapping" applications by the early 1960s. [11]

In 1960 the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Dr. Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory – an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.

CGIS was an improvement over "computer mapping" applications as it provided capabilities for overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data. [12]

CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.

In 1964 Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY – that served as sources for subsequent commercial development—to universities, research centers and corporations worldwide. [13]

By the late 1970s two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph) along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), MapInfo Corporation and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first generation approach to separation of spatial and attribute information with a second generation approach to organizing attribute data into database structures. [14]

In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product [15] was released for the DOS operating system. This was renamed in 1990 to MapInfo for Windows when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.

By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. Increasingly geospatial data and mapping applications are being made available via the World Wide Web (see List of GIS software § GIS as a service). [16]

Modern GIS technologies use digital information, for which various digitized data creation methods are used. The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program, and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, Helikites and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves the tracing of geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus. [17] [18] The puck has a small window with cross-hairs which allows for greater precision and pinpointing map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality. [18]

Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions. [19]
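The input-dataset, operation, output-dataset shape described above can be sketched as a toy feature-selection operation in Python; the data and names here are invented for illustration, not taken from any particular GIS package:

```python
import math

def select_within(features, center, radius):
    """Toy geoprocessing operation: dataset in, filtered dataset out.

    `features` is a list of (name, (x, y)) tuples; the output dataset
    keeps only the features within `radius` of `center`.
    """
    cx, cy = center
    return [(name, (x, y)) for name, (x, y) in features
            if math.hypot(x - cx, y - cy) <= radius]

wells = [("A", (0, 0)), ("B", (3, 4)), ("C", (10, 10))]
nearby = select_within(wells, center=(0, 0), radius=5)  # keeps A and B
```

Real geoprocessing tools follow the same contract, only with richer datasets (geometries plus attribute tables) and operations such as overlay, buffering, and conversion.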

Relating information from different sources

GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time.

Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.
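The idea of location as the shared key index can be sketched in a few lines of Python; the datasets and values here are hypothetical:

```python
# Two otherwise unrelated datasets keyed by the same (lat, lon) locations.
rainfall = {(51.5, -0.1): 550, (48.9, 2.4): 640}   # mm per year (hypothetical)
elevation = {(51.5, -0.1): 11, (48.9, 2.4): 35}    # metres (hypothetical)

# A location-keyed "join": combine attributes for every shared location,
# exactly as a relational database would join tables on a common key.
combined = {
    loc: {"rain_mm": rainfall[loc], "elev_m": elevation[loc]}
    for loc in rainfall.keys() & elevation.keys()
}
```

A production GIS does the same join with spatial indexes and tolerance-based matching rather than exact dictionary keys, but the principle is identical.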

Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted and represented. [20] This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated.

GIS uncertainties

GIS accuracy depends upon the source data and how they are encoded and referenced. Land surveyors have been able to provide a high level of positional accuracy using GPS-derived positions. [21] High-resolution digital terrain models and aerial imagery, [22] powerful computers and Web technology are changing the quality, utility, and expectations of GIS to serve society on a grand scale; nevertheless, other source data, such as paper maps, also affect overall GIS accuracy and may be of limited use in achieving the desired accuracy.

In developing a digital topographic database for a GIS, topographical maps are the main source, and aerial photography and satellite imagery are extra sources for collecting data and identifying attributes which can be mapped in layers over a location facsimile of scale. The scale of a map and geographical rendering area representation type, or map projection, are very important aspects since the information content depends mainly on the scale set and resulting locatability of the map's representations. In order to digitize a map, the map has to be checked within theoretical dimensions, then scanned into a raster format, and resulting raster data has to be given a theoretical dimension by a rubber sheeting/warping technology process known as georeferencing.
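The georeferencing step, in its simplest purely affine form, amounts to fitting a transform from scanned pixel coordinates to ground coordinates using control points. Below is a minimal Python sketch that solves the transform exactly from three ground control points; a real rubber-sheeting workflow would use many more points with a least-squares or piecewise fit:

```python
def affine_from_gcps(pix, geo):
    """Fit (x', y') = (a*x + b*y + c, d*x + e*y + f) exactly from three
    (pixel, ground) control-point pairs; returns ((a, b, c), (d, e, f))."""
    (x1, y1), (x2, y2), (x3, y3) = pix

    def solve(t1, t2, t3):
        # Solve [[x, y, 1]] @ [a, b, c] = t for one axis by Cramer's rule.
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        a = (t1 * (y2 - y3) - y1 * (t2 - t3) + (t2 * y3 - t3 * y2)) / det
        b = (x1 * (t2 - t3) - t1 * (x2 - x3) + (x2 * t3 - x3 * t2)) / det
        c = (x1 * (y2 * t3 - y3 * t2) - y1 * (x2 * t3 - x3 * t2)
             + t1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    gx = solve(geo[0][0], geo[1][0], geo[2][0])  # coefficients for x'
    gy = solve(geo[0][1], geo[1][1], geo[2][1])  # coefficients for y'
    return gx, gy
```

The resulting six coefficients are exactly the kind of "world file" parameters a GIS stores alongside a georeferenced raster.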

A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict. [23]

Data representation

GIS data represents real objects (such as roads, land use, elevation, trees, waterways, etc.) with digital data determining the mix. Real objects can be divided into two abstractions: discrete objects (e.g., a house) and continuous fields (such as rainfall amount or elevation). Traditionally, two broad methods are used to store data in a GIS for both kinds of abstractions: raster images and vector data. Points, lines, and polygons are the vector primitives used to represent mapped location attribute references.

A new hybrid method of storing data is that of identifying point clouds, which combine three-dimensional points with RGB information at each point, returning a "3D color image". GIS thematic maps then are becoming more and more realistically visually descriptive of what they set out to show or determine.

For a list of popular GIS file formats, such as shapefiles, see GIS file formats § Popular GIS file formats.

Data capture

Data capture—entering information into the system—consumes much of the time of GIS practitioners. There are a variety of methods used to enter data into a GIS where it is stored in a digital format.

Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that could be further processed to produce vector data.

Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) like Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers with the ability to edit live data using wireless connections or disconnected editing sessions. [24] This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time. This eliminates the need to post process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate.

Remotely sensed data also plays an important role in data collection; it is gathered by sensors attached to a platform. Sensors include cameras, digital scanners and lidar, while platforms usually consist of aircraft and satellites. In England in the mid-1990s, hybrid kite/balloons called helikites first pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and gather more accurate data than aircraft. Helikites can be used over roads, railways and towns where unmanned aerial vehicles (UAVs) are banned.

Recently aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes. [25]

The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for high-quality digital cameras this step is skipped.

Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.

Web mining is a novel method of collecting spatial data. Researchers build a web crawler application to aggregate required spatial data from the web. [26] For example, the exact geo-location or the neighborhood of apartments can be collected from online real estate listing websites.

When data is captured, the user should consider whether it should be captured with relative or absolute accuracy, since this could influence not only how information will be interpreted but also the cost of data capture.

After entering data into a GIS, the data usually requires editing, to remove errors, or further processing. For vector data it must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected.
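Undershoot correction of the kind described above is often done by snapping line endpoints to nearby nodes within a tolerance. A toy Python sketch follows; the data are invented for illustration, and a real GIS package would use a spatial index rather than a linear scan:

```python
import math

def snap_endpoints(lines, nodes, tol):
    """Snap each line endpoint to the nearest node within `tol`.

    `lines` is a list of ((x, y), (x, y)) segments; endpoints farther
    than `tol` from every node are left where they are.
    """
    def snap(pt):
        nearest = min(nodes, key=lambda n: math.dist(n, pt))
        return nearest if math.dist(nearest, pt) <= tol else pt
    return [(snap(a), snap(b)) for a, b in lines]

# A road segment that undershoots the intersection node at (10, 0).
roads = [((0.0, 0.0), (9.9, 0.1))]
fixed = snap_endpoints(roads, nodes=[(0.0, 0.0), (10.0, 0.0)], tol=0.5)
```

After snapping, the segment terminates exactly on the node, so the network becomes topologically connected at that intersection.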

Raster-to-vector translation

Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion.
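The first stage of such a raster-to-vector conversion, locating the boundaries between differently classified cells, can be sketched as follows (tracing the collected edges into closed polylines is the omitted next step):

```python
def class_boundaries(raster):
    """Collect internal edges where adjacent cells differ in class.

    Each edge is recorded as the pair of (row, col) cells it separates.
    Sketch only: a full converter would chain these edges into polygons.
    """
    edges = []
    rows, cols = len(raster), len(raster[0])
    for r in range(rows):
        for c in range(cols):
            # compare with the right and lower neighbour only, so each
            # internal edge is examined exactly once
            if c + 1 < cols and raster[r][c] != raster[r][c + 1]:
                edges.append(((r, c), (r, c + 1)))
            if r + 1 < rows and raster[r][c] != raster[r + 1][c]:
                edges.append(((r, c), (r + 1, c)))
    return edges

# One cell of class 2 in a class-1 raster yields two boundary edges:
print(class_boundaries([[1, 1], [1, 2]]))
```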

More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering and a variety of other techniques including use of two-dimensional Fourier transforms. Since digital data are collected and stored in various ways, two data sources may not be entirely compatible, so a GIS must be able to convert geographic data from one structure to another. In so doing, the implicit assumptions behind different ontologies and classifications require analysis. [27] Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers.

Projections, coordinate systems, and registration

The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like North American Datum of 1983 for U.S. measurements, and the World Geodetic System for worldwide measurements.

The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient. [28]
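A Helmert (seven-parameter similarity) transformation in its common small-angle form can be sketched as below; the parameter values used in the example are illustrative, not an official datum shift:

```python
def helmert(point, tx, ty, tz, s_ppm, rx, ry, rz):
    """Seven-parameter Helmert transformation, small-angle approximation.

    point: geocentric (x, y, z) in metres; tx..tz translations in metres;
    s_ppm scale change in parts per million; rx..rz rotations in radians.
    """
    x, y, z = point
    s = 1.0 + s_ppm * 1e-6
    return (
        tx + s * (x - rz * y + ry * z),
        ty + s * (rz * x + y - rx * z),
        tz + s * (-ry * x + rx * y + z),
    )

# With zero rotation and zero scale change this reduces to a translation:
print(helmert((100.0, 200.0, 300.0), 10.0, 20.0, 30.0, 0.0, 0.0, 0.0, 0.0))
```

Real datum transformations publish these seven parameters for a particular pair of datums; a simple translation, as mentioned above, is the special case where the scale change and rotations are zero.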

In popular GIS software, data projected in latitude/longitude is often represented as a geographic coordinate system. For example, latitude/longitude data referenced to the North American Datum of 1983 is denoted 'GCS North American 1983'.

GIS spatial analysis is a rapidly changing field, and GIS packages are increasingly including analytical tools as standard built-in facilities, as optional toolsets, as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. In its broadest sense, even conversion to a vectorial representation, or any other digitisation process, can be described as part of GIS processing.

Slope and aspect

Slope can be defined as the steepness or gradient of a unit of terrain, usually measured as an angle in degrees or as a percentage. Aspect can be defined as the direction in which a unit of terrain faces. Aspect is usually expressed in degrees from north. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using elevation values of a cell's adjacent neighbours. [29] Slope is a function of resolution, and the spatial resolution used to calculate slope and aspect should always be specified. [30] Various authors have compared techniques for calculating slope and aspect. [31] [32] [33]

The following method can be used to derive slope and aspect:
The elevation at a point or unit of terrain will have perpendicular tangents (slope) passing through the point, in an east–west and north–south direction. These two tangents give two components, ∂z/∂x and ∂z/∂y, which can then be used to determine the overall direction of slope and the aspect of the slope. The gradient is defined as a vector quantity with components equal to the partial derivatives of the surface in the x and y directions. [34]

The overall slope S and aspect A on a 3×3 grid, for methods that determine the east–west and north–south components, are calculated with the following formulas respectively:
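The formulas themselves do not survive in this copy of the text; one common finite-difference formulation is the following (aspect conventions vary between GIS packages, so treat this as a representative form, with A taken as the compass bearing of the downslope direction, clockwise from north):

```latex
S = \arctan\sqrt{\left(\frac{\partial z}{\partial x}\right)^{2}
               + \left(\frac{\partial z}{\partial y}\right)^{2}}
\qquad
A = \operatorname{atan2}\!\left(-\frac{\partial z}{\partial x},\;
                                -\frac{\partial z}{\partial y}\right)
```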

Zhou and Liu [33] describe another formula for calculating aspect, as follows:
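Whichever formula is used, the neighborhood computation is small. A minimal sketch using plain central differences (not any particular published method; the window orientation and compass-aspect convention here are assumptions):

```python
import math

def slope_aspect(window, cellsize):
    """Slope (degrees) and aspect (compass degrees) at the centre of a
    3x3 elevation window, using simple central differences.

    window[0] is taken as the northern row; published methods such as
    Horn's weighted differences differ in detail -- this is a sketch.
    """
    # east-west and north-south components of the gradient
    dz_dx = (window[1][2] - window[1][0]) / (2.0 * cellsize)
    dz_dy = (window[0][1] - window[2][1]) / (2.0 * cellsize)
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    # aspect: compass bearing of the downslope direction
    aspect = (math.degrees(math.atan2(-dz_dx, -dz_dy)) + 360.0) % 360.0
    return slope, aspect

# A plane rising 1 m per 1 m cell toward the east: 45 degree slope,
# facing west (270 degrees).
print(slope_aspect([[0, 1, 2], [0, 1, 2], [0, 1, 2]], cellsize=1.0))
```

This also makes the resolution dependence noted above concrete: doubling `cellsize` halves the computed gradient components for the same elevation differences.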

Data analysis

It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information - such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region. [35]

Additionally, from a series of three-dimensional points, or digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach, by computing all of the areas contiguous and uphill from any given point of interest. Similarly, the expected thalweg, the path along which surface water would travel in intermittent and permanent streams, can be computed from elevation data in the GIS.

Topological modeling

A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).

Geometric networks

Geometric networks are linear networks of objects that can be used to represent interconnected features, and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weight and flow assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling.
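Routing over such a network is typically a shortest-path computation over the weighted edges; a minimal sketch with Dijkstra's algorithm over an invented road graph:

```python
import heapq

def shortest_path_cost(edges, start, goal):
    """Dijkstra's algorithm on a weighted directed network, as used for
    routing on road or utility networks.

    edges: {node: [(neighbour, weight), ...]}. Returns the least total
    weight from start to goal, or infinity if goal is unreachable.
    """
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

roads = {
    "depot": [("a", 4.0), ("b", 1.0)],
    "b": [("a", 2.0), ("c", 5.0)],
    "a": [("c", 1.0)],
}
print(shortest_path_cost(roads, "depot", "c"))  # depot -> b -> a -> c
```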

Hydrological modeling

GIS hydrological models can provide a spatial element that other hydrological models lack, with the analysis of variables such as slope, aspect and watershed or catchment area. [37] Terrain analysis is fundamental to hydrology, since water always flows down a slope. [37] As basic terrain analysis of a digital elevation model (DEM) involves calculation of slope and aspect, DEMs are very useful for hydrological analysis. Slope and aspect can then be used to determine direction of surface runoff, and hence flow accumulation for the formation of streams, rivers and lakes. Areas of divergent flow can also give a clear indication of the boundaries of a catchment. Once a flow direction and accumulation matrix has been created, queries can be performed that show contributing or dispersal areas at a certain point. [37] More detail can be added to the model, such as terrain roughness, vegetation types and soil types, which can influence infiltration and evapotranspiration rates, and hence surface flow. One of the main uses of hydrological modeling is in environmental contamination research. Other applications of hydrological modeling include groundwater and surface water mapping, as well as flood risk maps.
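The flow-direction step can be sketched with the common D8 rule, which routes each cell's flow to its steepest downslope neighbour (a simplification; real implementations also resolve flat areas and fill depressions first):

```python
import math

def d8_flow_direction(dem, r, c):
    """Direction of steepest descent (D8) for cell (r, c) of a DEM.

    Returns the (dr, dc) offset of the neighbour receiving the flow,
    or None for a pit or flat cell.
    """
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dem) and 0 <= nc < len(dem[0]):
                # elevation drop per unit distance (diagonals are sqrt(2) away)
                drop = (dem[r][c] - dem[nr][nc]) / math.hypot(dr, dc)
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best

dem = [[9, 8, 7],
       [8, 5, 3],
       [7, 3, 0]]
print(d8_flow_direction(dem, 1, 1))  # flows diagonally to the lowest cell
```

Summing, for every cell, how many upstream cells drain through it then yields the flow-accumulation matrix described above.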

Cartographic modeling

Dana Tomlin probably coined the term "cartographic modeling" in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990). [38] Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.

Map overlay

The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area.
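The set semantics of these overlays can be illustrated with a toy example. Here each layer is a dict of raster-style cells rather than true polygons, but union, intersect, and symmetric difference behave the same way as the vector operations described above (layer contents are invented):

```python
def overlay(layer_a, layer_b, mode):
    """Union, intersect, or symmetric-difference overlay of two layers.

    Each layer maps a feature cell to its attribute table; intersected
    features keep attribute fields from both inputs.
    """
    keys_a, keys_b = set(layer_a), set(layer_b)
    if mode == "union":
        keys = keys_a | keys_b
    elif mode == "intersect":
        keys = keys_a & keys_b
    elif mode == "symmetric_difference":
        keys = keys_a ^ keys_b
    else:
        raise ValueError(mode)
    # merge attribute tables for every retained feature
    return {k: {**layer_a.get(k, {}), **layer_b.get(k, {})} for k in keys}

soils = {(0, 0): {"soil": "clay"}, (0, 1): {"soil": "loam"}}
zoning = {(0, 1): {"zone": "res"}, (1, 1): {"zone": "ind"}}
print(sorted(overlay(soils, zoning, "intersect")))  # only the shared cell
print(len(overlay(soils, zoning, "union")))
```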

Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.

In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
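A weighted "index model" over two co-registered rasters might look like the sketch below (the weights and scores are invented):

```python
def index_model(rasters, weights):
    """'Local operation on multiple rasters': combine co-registered
    rasters cell by cell with a weighted sum, a simple index model.
    """
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [
        [
            sum(w * raster[r][c] for w, raster in zip(weights, rasters))
            for c in range(cols)
        ]
        for r in range(rows)
    ]

slope_score = [[1, 3], [2, 4]]
soil_score = [[4, 2], [1, 1]]
# slope judged twice as influential as soil for, say, erosion risk
print(index_model([slope_score, soil_score], [2.0, 1.0]))
```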

Geostatistics

Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation, and predict values at arbitrary locations (interpolation).

When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment, weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined by the scale and distribution of the data collection.

To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistic and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable.

Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for a small section of terrain).
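Inverse distance weighting is one of the simplest such methods: global in the sense that it uses every sample, and exact in the sense that a query at a sample point returns that sample's value. A minimal sketch with invented gauge data:

```python
def idw(samples, x, y, power=2.0):
    """Inverse-distance-weighted interpolation at (x, y).

    samples: [((sx, sy), value), ...]. Nearer samples receive larger
    weights; `power` controls how quickly influence falls off.
    """
    num = den = 0.0
    for (sx, sy), value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

rain_gauges = [((0.0, 0.0), 0.0), ((2.0, 0.0), 10.0)]
print(idw(rain_gauges, 1.0, 0.0))  # midway between the gauges -> 5.0
```

Evaluating `idw` over every cell of a grid produces the kind of raster surface from which the contour lines described above are drawn.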

Interpolation is justified by the principle of spatial autocorrelation, which recognizes that data collected at any position will have great similarity to, or influence on, locations within its immediate vicinity.

Address geocoding

Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be a location within the actual parcel rather than an interpolated point along a centerline. This approach is being increasingly used to provide more precise location information.
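The interpolation along an address range can be sketched as a linear parameterization of the centerline segment (real geocoders also handle odd/even street sides and offset the point from the line):

```python
def geocode(address, low, high, start, end):
    """Interpolate a point for `address` along a road-centreline segment
    whose address range runs from `low` at `start` to `high` at `end`.
    """
    t = (address - low) / float(high - low)
    return (start[0] + t * (end[0] - start[0]),
            start[1] + t * (end[1] - start[1]))

# Address 500 on a segment ranging 1..1000 lands almost exactly midway:
print(geocode(500, 1, 1000, (0.0, 0.0), (100.0, 0.0)))
```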

Reverse geocoding

Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range.
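The estimate is simply the forward address interpolation run in reverse; a minimal sketch:

```python
def reverse_geocode(fraction, low, high):
    """Estimate the house number at a given fraction along a segment
    whose address range runs from `low` to `high` -- the inverse of the
    interpolation used in forward geocoding.
    """
    return int(round(low + fraction * (high - low)))

# Clicking at the midpoint of a 1..100 segment returns roughly 50:
print(reverse_geocode(0.5, 1, 100))
```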

Multi-criteria decision analysis

Coupled with GIS, multi-criteria decision analysis methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised. [39] GIS MCDA may reduce costs and time involved in identifying potential restoration sites.
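The simplest MCDA aggregation rule is a weighted sum over criterion scores normalised to a common scale; the site names, scores, and weights below are invented:

```python
def rank_alternatives(scores, weights):
    """Rank alternatives by a weighted-sum decision rule, the simplest
    MCDA aggregation. Criterion scores are assumed pre-normalised to a
    common 0-1 scale.
    """
    totals = {
        name: sum(w * s for w, s in zip(weights, criteria))
        for name, criteria in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# criteria per candidate restoration site: (vegetation cover, distance
# from roads), both already scaled to 0-1
candidate_sites = {
    "site_a": (0.9, 0.2),
    "site_b": (0.6, 0.8),
    "site_c": (0.3, 0.4),
}
print(rank_alternatives(candidate_sites, weights=(0.5, 0.5)))
```

In a GIS setting the same rule is usually applied cell by cell across criterion rasters, producing a suitability surface rather than a ranked list.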

Data output and cartography

Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, but quality cartography is also produced by importing layers into a design program for refinement. Most GIS software gives the user substantial control over the appearance of the data.

Cartographic work serves two major functions:

First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).

Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.
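A query like "all addresses within one mile of a spill" is a distance filter over features; a planar sketch (assumes coordinates are already projected to units matching the radius, and the addresses are invented):

```python
import math

def within_distance(points, centre, radius):
    """Return the named features within `radius` of `centre`.

    points: {name: (x, y)} in a planar coordinate system whose units
    match `radius`.
    """
    cx, cy = centre
    return [
        name
        for name, (x, y) in points.items()
        if math.hypot(x - cx, y - cy) <= radius
    ]

addresses = {
    "12 Oak St": (0.3, 0.4),
    "9 Elm St": (2.0, 0.0),
    "1 Main St": (0.0, 1.0),
}
print(within_distance(addresses, centre=(0.0, 0.0), radius=1.6))  # ~1 mile in km
```

Production systems use a spatial index (e.g. an R-tree) rather than a linear scan, but the predicate is the same.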

Graphic display techniques

Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of land surface with contour lines or with shaded relief.

Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.

  • The digital elevation model, consisting of surface elevations recorded on a 30-meter horizontal grid, shows high elevations as white and low elevation as black.
  • The accompanying Landsat Thematic Mapper image shows a false-color infrared image looking down at the same area in 30-meter pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information.

A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.

An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data.

Spatial ETL

Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets.

GIS data mining

GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis. [40]
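A classic measure of the spatial correlation at issue is Moran's I; a minimal sketch over an invented one-dimensional chain of cells with binary adjacency weights:

```python
def morans_i(values, neighbours):
    """Moran's I, a basic global measure of spatial autocorrelation.

    values: observations per cell; neighbours: {i: [j, ...]} listing the
    cells adjacent to i (binary weights). Positive I indicates clustering
    of similar values, negative I indicates alternation.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(len(js) for js in neighbours.values())
    cross = sum(dev[i] * dev[j] for i, js in neighbours.items() for j in js)
    return (n / w_sum) * cross / sum(d * d for d in dev)

# four cells in a row, each adjacent to its immediate neighbours
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(morans_i([1, 1, 0, 0], chain))  # clustered -> positive
print(morans_i([1, 0, 1, 0], chain))  # alternating -> negative
```

Non-spatial data mining methods assume independent observations; statistics like this one quantify exactly how far spatial data departs from that assumption.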

Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The hundreds of different uses of GIS can be classified in several ways:

  • Goal: the purpose of an application can be broadly classified as either scientific research or resource management. The purpose of research, defined as broadly as possible, is to discover new knowledge; this may be performed by someone who considers herself a scientist, but may also be done by anyone who is trying to learn why the world appears to work the way it does. A study as practical as deciphering why a business location has failed would be research in this sense. Management (sometimes called operational applications), also defined as broadly as possible, is the application of knowledge to make practical decisions on how to employ the resources one has control over to achieve one's goals. These resources could be time, capital, labor, equipment, land, mineral deposits, wildlife, and so on. [41] : 791
    • Decision level: Management applications have been further classified as strategic, tactical, operational, a common classification in business management. [42] Strategic tasks are long-term, visionary decisions about what goals one should have, such as whether a business should expand or not. Tactical tasks are medium-term decisions about how to achieve strategic goals, such as a national forest creating a grazing management plan. Operational decisions are concerned with the day-to-day tasks, such as a person finding the shortest route to a pizza restaurant.

The implementation of a GIS is often driven by jurisdictional (such as a city), purpose, or application requirements. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for an application, jurisdiction, enterprise, or purpose may not necessarily be interoperable or compatible with a GIS that has been developed for some other application, jurisdiction, enterprise, or purpose. [48]

GIS is also diverging into location-based services, which allows GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing.

Open Geospatial Consortium standards

The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service and Web Feature Service. [49]

GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.

Compliant Products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, the product is automatically registered as "compliant" on the OGC website.

Implementing Products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry.

Web mapping

In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information. [50]

Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have proven to provide a high level of value and benefit to end users beyond what is possible through traditional geographic information. [51] [52]

Adding the dimension of time

The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years. As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.

GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced, for example, by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 square kilometer. The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and more recently the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis.

In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of data required to produce this data set would not have been possible without GIS.

Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems.

Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications [53] [54] and to enable new analysis mechanisms. [55]

Ontologies are a key component of this semantic approach as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of land cover type forest in another more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology [56] developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies [57] developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group [58] to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF and the SPARQL database query protocols.

Recent research results in this area can be seen in the International Conference on Geospatial Semantics [59] and the Terra Cognita – Directions to the Geospatial Semantic Web [60] workshop at the International Semantic Web Conference.

With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS. [61] [62] [50] GIS can also be misused to distort reality for individual and political gain. [63] [64] It has been argued that the production, distribution, utilization, and representation of geographic information are largely related to the social context and have the potential to increase citizen trust in government. [65] Other related topics include discussion on copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.

In education

At the end of the 20th century, GIS began to be recognized as tools that could be used in the classroom. [66] [67] [68] [69] The benefits of GIS in education seem focused on developing spatial thinking, but there is not enough bibliography or statistical data to show the concrete scope of the use of GIS in education around the world, although the expansion has been faster in those countries where the curriculum mentions them. [70] : 36

GIS seem to provide many advantages in teaching geography because they allow for analyses based on real geographic data, help raise many research questions from teachers and students in classrooms, and contribute to improved learning by developing spatial and geographical thinking and, in many cases, student motivation. [70] : 38

In local government

GIS is proven as an organization-wide, enterprise and enduring technology that continues to change how local government operates. [71] Government agencies have adopted GIS technology as a method to better manage the following areas of government organization:

  • Economic Development departments use interactive GIS mapping tools, aggregated with other data (demographics, labor force, business, industry, talent) along with a database of available commercial sites and buildings in order to attract investment and support existing business. Businesses making location decisions can use the tools to choose communities and sites that best match their criteria for success. GIS Planning is the industry's leading vendor of GIS data web tools for economic development and investment attraction. A service from the Financial Times, GIS Planning's ZoomProspector Enterprise and Intelligence Components software are in use around the world. This includes 30 US statewide economic development organizations, the majority of the top 100 metro areas in North America and a number of investment attraction agencies in Europe and Latin America.
  • Public Safety [72] operations such as Emergency Operations Centers, Fire Prevention, Police and Sheriff mobile technology and dispatch, and mapping weather risks.
  • Parks and Recreation departments and their functions in asset inventory, land conservation, land management, and cemetery management.
  • Public Works and Utilities, tracking water and stormwater drainage, electrical assets, engineering projects, and public transportation assets and trends.
  • Fiber Network Management for interdepartmental network assets
  • School analytical and demographic data, asset management, and improvement/expansion planning
  • Public Administration for election data, property records, and zoning/management.

The Open Data initiative is pushing local government to take advantage of technology such as GIS technology, as it encompasses the requirements to fit the Open Data/Open Government model of transparency. [71] With Open Data, local government organizations can implement Citizen Engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more. [73] [74] The push for open data within government organizations is driving the growth in local government GIS technology spending, and database management.



Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object.
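The range computation behind such active sensors follows directly from the round-trip time of the emitted pulse:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_echo(delay_seconds):
    """Range to a target from a radar/lidar echo delay: the pulse
    travels out and back, so halve the round-trip time.
    """
    return C * delay_seconds / 2.0

# a 2-microsecond round trip corresponds to a target roughly 300 m away
print(range_from_echo(2e-6))
```

Successive range measurements to the same target then give its speed and direction of motion.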

    Remote sensing makes it possible to collect data of dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial features in Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed.

    Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which in conjunction with larger scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation, [6] [7] oil spill detection and monitoring, [8] and national security and overhead, ground-based and stand-off collection on border areas. [9]

    The basis for multispectral collection and analysis is that examined areas or objects reflect or emit radiation that stands out from surrounding areas. For a summary of major remote sensing satellite systems see the overview table.

    Applications of remote sensing

    • Conventional radar is mostly associated with aerial traffic control, early warning, and certain large-scale meteorological data. Doppler radar is used by local law enforcement to monitor speed limits and in enhanced meteorological collection such as wind speed and direction within weather systems, in addition to precipitation location and intensity. Other types of active collection include measurement of plasmas in the ionosphere. Interferometric synthetic aperture radar is used to produce precise digital elevation models of large-scale terrain (see RADARSAT, TerraSAR-X, Magellan).
    • Laser and radar altimeters on satellites have provided a wide range of data. By measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so. By measuring the height and wavelength of ocean waves, the altimeters measure wind speeds and direction, and surface ocean currents and directions.
    • Ultrasound (acoustic) and radar tide gauges measure sea level, tides and wave direction in coastal and offshore tide gauges.
    • Light detection and ranging (LIDAR) is well known in examples of weapon ranging and laser-illuminated homing of projectiles. LIDAR is used to detect and measure the concentration of various chemicals in the atmosphere, while airborne LIDAR can be used to measure the heights of objects and features on the ground more accurately than with radar technology. Vegetation remote sensing is a principal application of LIDAR.
    • Radiometers and photometers are the most common instruments in use, collecting reflected and emitted radiation in a wide range of frequencies. The most common are visible and infrared sensors, followed by microwave, gamma-ray, and, rarely, ultraviolet. They may also be used to detect the emission spectra of various chemicals, providing data on chemical concentrations in the atmosphere.
    • Radiometers are also used at night, because artificial light emissions are a key signature of human activity. [11] Applications include remote sensing of population, GDP, and damage to infrastructure from war or disasters.
    • Radiometers and radar onboard satellites can be used to monitor volcanic eruptions. [12][13]
    • Spectropolarimetric imaging has been reported to be useful for target tracking purposes by researchers at the U.S. Army Research Laboratory. They determined that manmade items possess polarimetric signatures that are not found in natural objects. These conclusions were drawn from the imaging of military trucks, like the Humvee, and trailers with their acousto-optic tunable filter dual hyperspectral and spectropolarimetric VNIR Spectropolarimetric Imager. [14][15]
    • Stereographic pairs of aerial photographs have often been used to make topographic maps by imagery and terrain analysts in trafficability and highway departments for potential routes, in addition to modelling terrestrial habitat features. [16][17][18]
    • Simultaneous multi-spectral platforms such as Landsat have been in use since the 1970s. These thematic mappers take images in multiple wavelengths of electromagnetic radiation (multi-spectral) and are usually found on Earth observation satellites, including (for example) the Landsat program or the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, detect or monitor land usage, detect invasive vegetation and deforestation, and examine the health of indigenous plants and crops (satellite crop monitoring), including entire farming regions or forests. [4][1] Prominent scientists using remote sensing for this purpose include Janet Franklin and Ruth DeFries. Landsat images are used by regulatory agencies such as KYDOW to indicate water quality parameters including Secchi depth, chlorophyll density, and total phosphorus content. Weather satellites are used in meteorology and climatology.
    • Hyperspectral imaging produces an image where each pixel has full spectral information, imaging narrow spectral bands over a contiguous spectral range. Hyperspectral imagers are used in various applications including mineralogy, biology, defence, and environmental measurements.
    • Within the scope of the combat against desertification, remote sensing allows researchers to follow up and monitor risk areas in the long term, to determine desertification factors, to support decision-makers in defining relevant measures of environmental management, and to assess their impacts. [19]
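The ranging principle shared by the active sensors above (radar, sonar, LIDAR) — timing the delay between emission and return — reduces to a one-line calculation. A minimal sketch; the pulse delay used here is a made-up illustrative value:

```python
# An active sensor's pulse covers the sensor-to-target distance twice,
# so range = speed of propagation * delay / 2.
C = 299_792_458.0  # speed of light, m/s (for radar/LIDAR pulses)

def range_from_delay(delay_s):
    """Distance (m) to a target given the round-trip pulse delay (s)."""
    return C * delay_s / 2.0

# A return arriving 6.67 microseconds after emission corresponds to a
# target roughly 1 km away.
print(round(range_from_delay(6.67e-6)))  # 1000
```

For sonar the same formula applies with the speed of sound in water in place of C.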

    Geodetic

    Geodetic remote sensing can be gravimetric or geometric. Overhead gravity data collection was first used in aerial submarine detection. This data revealed minute perturbations in the Earth's gravitational field that may be used to determine changes in the mass distribution of the Earth, which in turn may be used for geophysical studies, as in GRACE. Geometric remote sensing includes position and deformation imaging using InSAR, LIDAR, etc. [20]

    Acoustic and near-acoustic

    • Sonar: passive sonar listens for the sound made by another object (a vessel, a whale, etc.); active sonar emits pulses of sound and listens for echoes, and is used for detecting, ranging and measurement of underwater objects and terrain.
    • Seismograms taken at different locations can locate and measure earthquakes (after they occur) by comparing the relative intensity and precise timings.
    • Ultrasound: ultrasound sensors emit high-frequency pulses and listen for echoes, and are used for detecting water waves and water level, as in tide gauges or for towing tanks.

    To coordinate a series of large-scale observations, most sensing systems depend on the following: platform location and the orientation of the sensor. High-end instruments now often use positional information from satellite navigation systems. The rotation and orientation are often provided within a degree or two with electronic compasses. Compasses can measure not just azimuth (i. e. degrees to magnetic north), but also altitude (degrees above the horizon), since the magnetic field curves into the Earth at different angles at different latitudes. More exact orientations require gyroscopic-aided orientation, periodically realigned by different methods including navigation from stars or known benchmarks.

    The quality of remote sensing data consists of its spatial, spectral, radiometric and temporal resolutions.

    Spatial resolution: the size of a pixel recorded in a raster image – typically pixels correspond to square areas with side lengths from 1 to 1,000 metres (3.3 to 3,280.8 ft).
    Spectral resolution: the wavelength of the different frequency bands recorded – usually related to the number of frequency bands recorded by the platform. Current Landsat collection is that of seven bands, including several in the infrared spectrum, ranging from 0.7 to 2.1 μm. The Hyperion sensor on Earth Observing-1 resolves 220 bands from 0.4 to 2.5 μm, with a spectral resolution of roughly 0.010 to 0.011 μm per band.
    Radiometric resolution: the number of different intensities of radiation the sensor is able to distinguish. Typically this ranges from 8 to 14 bits, corresponding to 256 up to 16,384 intensities or "shades" of colour in each band. It also depends on the instrument noise.
    Temporal resolution: the frequency of flyovers by the satellite or plane, relevant only in time-series studies or those requiring an averaged or mosaic image, as in deforestation monitoring. Repeat coverage was first used by the intelligence community, where it revealed changes in infrastructure, the deployment of units, or the modification/introduction of equipment. Cloud cover over a given area or object makes it necessary to repeat the collection of that location.
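The radiometric figures above follow directly from the bit depth: an n-bit sensor distinguishes 2**n intensity levels. A minimal sketch:

```python
def intensity_levels(bits):
    """Number of distinct radiation intensities an n-bit sensor can record."""
    return 2 ** bits

# The 8-to-14-bit range quoted above corresponds to 256 up to 16,384 levels.
print(intensity_levels(8))   # 256
print(intensity_levels(14))  # 16384
```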

    In order to create sensor-based maps, most remote sensing systems expect to extrapolate sensor data in relation to a reference point, including distances between known points on the ground. This depends on the type of sensor used. For example, in conventional photographs distances are accurate in the center of the image, with the distortion of measurements increasing the farther you get from the center. Another factor is the platen against which the film is pressed; irregularities in it can cause severe errors when photographs are used to measure ground distances. The step in which this problem is resolved is called georeferencing and involves computer-aided matching of points in the image (typically 30 or more points per image), which is extrapolated with the use of an established benchmark, "warping" the image to produce accurate spatial data. As of the early 1990s, most satellite images are sold fully georeferenced.

    In addition, images may need to be radiometrically and atmospherically corrected.

    Radiometric correction: corrects radiometric errors and distortions. The illumination of objects on the Earth's surface is uneven because of different properties of the relief; radiometric distortion correction takes this factor into account. [21] Radiometric correction also gives a scale to the pixel values, e.g. converting a monochromatic scale of 0 to 255 to actual radiance values.
    Topographic correction (also called terrain correction): in rugged mountains, the effective illumination of pixels varies considerably with terrain. A pixel on a shady slope receives weak illumination and has a low radiance value, while a pixel on a sunny slope receives strong illumination and has a high radiance value; the same object therefore has different radiance values on the two slopes, and different objects may have similar radiance values. These ambiguities seriously affect the accuracy of information extraction from remote sensing images in mountainous areas and are a main obstacle to their further application. Topographic correction aims to eliminate this effect, recovering the true reflectivity or radiance of objects under horizontal conditions; it is a prerequisite for quantitative remote sensing applications.
    Atmospheric correction: eliminates atmospheric haze by rescaling each frequency band so that its minimum value (usually realised in water bodies) corresponds to a pixel value of 0. Digitizing the data also makes it possible to manipulate the data by changing gray-scale values.
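The atmospheric correction just described — shifting a band so its darkest value maps to 0 — is often called dark-object subtraction and can be sketched in a few lines. The pixel values here are made-up digital numbers, not real imagery:

```python
def dark_object_subtract(band):
    """Shift a band so its minimum (assumed dark target, e.g. water) becomes 0."""
    darkest = min(band)
    return [value - darkest for value in band]

band = [53, 61, 48, 120, 97]       # hypothetical raw pixel values in one band
print(dark_object_subtract(band))  # [5, 13, 0, 72, 49]
```

The assumption is that the darkest pixel should be truly black, so any residual value there is attributed to atmospheric haze and removed from every pixel.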

    Interpretation is the critical process of making sense of the data. The first application was aerial photographic collection, which used the following processes: spatial measurement with a light table in both conventional single and stereographic coverage; added skills such as photogrammetry; the use of photomosaics and repeat coverage; and making use of objects' known dimensions to detect modifications. Image analysis is the more recently developed, computer-aided application that is in increasing use.

    Object-Based Image Analysis (OBIA) is a sub-discipline of GIScience devoted to partitioning remote sensing (RS) imagery into meaningful image-objects, and assessing their characteristics through spatial, spectral and temporal scale.

    Old data from remote sensing is often valuable because it may provide the only long-term data for a large extent of geography. At the same time, the data is often complex to interpret, and bulky to store. Modern systems tend to store the data digitally, often with lossless compression. The difficulty with this approach is that the data is fragile, the format may be archaic, and the data may be easy to falsify. One of the best systems for archiving data series is as computer-generated machine-readable ultrafiche, usually in typefonts such as OCR-B, or as digitized half-tone images. Ultrafiches survive well in standard libraries, with lifetimes of several centuries. They can be created, copied, filed and retrieved by automated systems. They are about as compact as archival magnetic media, and yet can be read by human beings with minimal, standardized equipment.

    Generally speaking, remote sensing works on the principle of the inverse problem: while the object or phenomenon of interest (the state) may not be directly measured, there exists some other variable that can be detected and measured (the observation) which may be related to the object of interest through a calculation. The common analogy given to describe this is trying to determine the type of animal from its footprints. For example, while it is impossible to directly measure temperatures in the upper atmosphere, it is possible to measure the spectral emissions from a known chemical species (such as carbon dioxide) in that region. The frequency of the emissions may then be related via thermodynamics to the temperature in that region.
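The inverse-problem idea can be made concrete with a toy example: a forward model maps the unmeasurable state (here, surface temperature) to an observable (emitted radiance), and the state is recovered by searching for the value that reproduces the observation. The Stefan–Boltzmann law and the bisection search below are illustrative choices, not methods named in the text:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def forward(temp_k):
    """Forward model: blackbody radiant exitance for a given temperature."""
    return SIGMA * temp_k ** 4

def invert(observed, lo=0.0, hi=1000.0, tol=1e-6):
    """Recover the temperature that reproduces the observation, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if forward(mid) < observed:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

obs = forward(288.0)          # simulate an observation of a 288 K surface
print(round(invert(obs), 3))  # recovers approximately 288.0
```

Real retrievals invert far more complicated radiative-transfer models, but the structure — forward model plus numerical inversion — is the same.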

    Data processing levels

    To facilitate the discussion of data processing in practice, several processing "levels" were first defined in 1986 by NASA as part of its Earth Observing System [22] and have been steadily adopted since then, both internally at NASA (e.g., [23]) and elsewhere (e.g., [24]). These definitions are:

    Level Description
    0 Reconstructed, unprocessed instrument and payload data at full resolution, with any and all communications artifacts (e. g., synchronization frames, communications headers, duplicate data) removed.
    1a Reconstructed, unprocessed instrument data at full resolution, time-referenced, and annotated with ancillary information, including radiometric and geometric calibration coefficients and georeferencing parameters (e. g., platform ephemeris) computed and appended but not applied to the Level 0 data (or if applied, in a manner that level 0 is fully recoverable from level 1a data).
    1b Level 1a data that have been processed to sensor units (e.g., radar backscatter cross section, brightness temperature, etc.). Not all instruments have Level 1b data; Level 0 data is not recoverable from Level 1b data.
    2 Derived geophysical variables (e. g., ocean wave height, soil moisture, ice concentration) at the same resolution and location as Level 1 source data.
    3 Variables mapped on uniform spacetime grid scales, usually with some completeness and consistency (e. g., missing points interpolated, complete regions mosaicked together from multiple orbits, etc.).
    4 Model output or results from analyses of lower level data (i. e., variables that were not measured by the instruments but instead are derived from these measurements).

    A Level 1 data record is the most fundamental (i.e., highest reversible level) data record that has significant scientific utility, and is the foundation upon which all subsequent data sets are produced. Level 2 is the first level that is directly usable for most scientific applications; its value is much greater than that of the lower levels. Level 2 data sets tend to be less voluminous than Level 1 data because they have been reduced temporally, spatially, or spectrally. Level 3 data sets are generally smaller than lower level data sets and thus can be dealt with without incurring a great deal of data handling overhead. These data tend to be more useful for many applications. The regular spatial and temporal organization of Level 3 datasets makes it feasible to readily combine data from different sources.
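The reversibility distinction in the table — Level 1a appends calibration coefficients without applying them, so Level 0 stays recoverable, while Level 1b applies them and drops the raw counts — can be sketched as follows. All counts and coefficients here are illustrative, not from any real instrument:

```python
def to_level1a(raw_counts, gain, offset):
    """Level 1a: raw counts kept, calibration appended but NOT applied."""
    return {"counts": list(raw_counts), "gain": gain, "offset": offset}

def to_level1b(l1a):
    """Level 1b: calibration applied to sensor units; raw counts discarded."""
    return [l1a["gain"] * c + l1a["offset"] for c in l1a["counts"]]

l1a = to_level1a([100, 200, 300], gain=2, offset=1)
print(l1a["counts"])    # Level 0 still recoverable: [100, 200, 300]
print(to_level1b(l1a))  # calibrated values only: [201, 401, 601]
```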

    While these processing levels are particularly suitable for typical satellite data processing pipelines, other data level vocabularies have been defined and may be appropriate for more heterogeneous workflows.


    Author information

    Affiliations

    Key Laboratory of Geographic Information Science, Ministry of Education, East China Normal University, Joint Laboratory for Environmental Remote Sensing and Data Assimilation, ECNU & CEODE, CAS, Shanghai, 200062, China

    Youzhi An, Wei Gao, Chaoshun Liu & Runhe Shi

    Natural Resource Ecology Laboratory, Colorado State University, Fort Collins, CO, 80523, USA

    Yantai Institute of Coastal Zone Research, Chinese Academy of Sciences, Yantai, 264003, China



    Remote sensing and geographic information systems techniques in studies on treeline ecotone dynamics

    We performed a meta-analysis of over 100 studies applying remote sensing (RS) and geographic information systems (GIS) to understand treeline dynamics. A literature search was performed in multiple online databases, including Web of Knowledge (Thomson Reuters), Scopus (Elsevier), BASE (Bielefeld Academic Search Engine), CAB Direct, and Google Scholar, using treeline-related queries. We found that RS and GIS use has steadily increased in treeline studies since 2000. The spatial resolution of the satellite imagery used ranged from low-resolution MODIS through moderate-resolution Landsat to high-resolution WorldView imagery and aerial orthophotos. Most papers published in the 1990s used low- to moderate-resolution sensors such as Landsat Multispectral Scanner and Thematic Mapper, or SPOT PAN (Panchromatic) and MX (Multispectral) RS images. Subsequently, we observed a rise in high-resolution satellite sensors such as ALOS, GeoEye, IKONOS, and WorldView for mapping current and potential treelines. Furthermore, we noticed a shift in the emphasis of treeline studies over time: earlier reports focused on mapping treeline positions, whereas RS and GIS are now used to determine the factors that control treeline variation.


    NAIP, Landsat & MODIS

    In this week’s class, you will look at two types of spectral remote sensing data: NAIP and Landsat.

    Next week you will work with MODIS data.

    About NAIP Multispectral Imagery

    NAIP imagery is available in the United States and typically has three bands - red, green and blue. However, sometimes a 4th near-infrared band is available. NAIP imagery typically has 1 m spatial resolution, meaning that each pixel represents 1 meter on the Earth’s surface. NAIP data is often collected using a camera mounted on an airplane and is collected for a given geographic area every few years.

    Landsat 8 Imagery

    Compared to NAIP, Landsat data are collected using an instrument mounted on a satellite which orbits the globe, continuously collecting images. The Landsat instrument collects data at 30 meter spatial resolution but also has 11 bands distributed across the electromagnetic spectrum compared to the 3 or 4 that NAIP imagery has. Landsat also has one panchromatic band that collects information across the visible portion of the spectrum at 15 m spatial resolution.

    Landsat 8 bands 1-9 are listed below:

    Landsat 8 Bands

    Band                           Wavelength range (nm)   Spatial resolution (m)   Spectral width (nm)
    Band 1 - Coastal aerosol       430 - 450               30                       2.0
    Band 2 - Blue                  450 - 510               30                       6.0
    Band 3 - Green                 530 - 590               30                       6.0
    Band 4 - Red                   640 - 670               30                       3.0
    Band 5 - Near Infrared (NIR)   850 - 880               30                       3.0
    Band 6 - SWIR 1                1570 - 1650             30                       8.0
    Band 7 - SWIR 2                2110 - 2290             30                       18
    Band 8 - Panchromatic          500 - 680               15                       18
    Band 9 - Cirrus                1360 - 1380             30                       2.0
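A common use of the red (band 4) and near-infrared (band 5) measurements in this table is the normalized difference vegetation index, NDVI = (NIR − red) / (NIR + red). A minimal sketch with made-up reflectance values:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for a single pixel."""
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in the NIR and absorbs red light,
# pushing NDVI toward 1; bare surfaces sit near 0. Values are hypothetical.
print(round(ndvi(nir=0.50, red=0.10), 2))  # 0.67 - vegetated pixel
print(round(ndvi(nir=0.20, red=0.18), 2))  # 0.05 - bare or sparse surface
```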

    The bands for Landsat 7 (bottom) vs Landsat 8 (top). There are several other Landsat instruments that provide data - the most commonly used being Landsat 5 and 7. The specifications for each instrument are different. Source: USGS Landsat.

    MODIS Imagery

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument is another satellite-based instrument that continuously collects data over the Earth’s surface. MODIS collects spectral information at several spatial resolutions including 250 m, 500 m and 1000 m. You will be working with the 500 m spatial resolution MODIS data in this class. MODIS has 36 bands; in class you will learn about only the first 7 bands.
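The resolution differences among NAIP (1 m), Landsat (30 m) and MODIS (250-1000 m) are easier to grasp by counting how many fine-resolution pixels fit inside the footprint of one coarse pixel. A minimal sketch:

```python
def pixels_per_footprint(coarse_m, fine_m):
    """Fine-resolution pixels covering the area of one coarse pixel."""
    return (coarse_m / fine_m) ** 2

print(int(pixels_per_footprint(30, 1)))      # 900 NAIP pixels per Landsat pixel
print(int(pixels_per_footprint(500, 1)))     # 250000 NAIP pixels per 500 m MODIS pixel
print(round(pixels_per_footprint(500, 30)))  # ~278 Landsat pixels per MODIS pixel
```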


    Developing indicators of ecosystem condition using geographic information systems and remote sensing

    Improvements in remote sensing technologies and the use of geographic information system (GIS), are increasingly allowing us to develop indicators that can be used to monitor and assess ecosystem condition and change at multiple scales. This paper presents global- and regional-level indicators developed by the World Resources Institute and collaborating partners using remote sensing and GIS. Presented as regional and global maps, these spatial indicators are ideal communication tools to raise awareness of the condition of the Earth’s ecosystems among different audiences. Global and regional spatial indicators not only inform us about the current condition of, and pressures on, ecosystems, but also about the likely capacity of the ecosystem to continue to provide goods and services to future generations. The increasing focus on integrating socio-economic and biological information with remote sensing and GIS technology can only help to further our understanding and capacity to manage ecosystems in a more sustainable manner.



    20. Active Remote Sensing Systems

    The remote sensing systems you've studied so far are sensitive to the visible, near-infrared, and thermal infrared bands of the electromagnetic spectrum, wavelengths at which the magnitude of solar radiation is greatest. Quickbird, WorldView, Landsat and MODIS are all passive sensors that measure only radiation emitted by the Sun and reflected or emitted by the Earth.

    Although we used the common desktop document scanner as an analogy for remote sensing instruments throughout this chapter, the analogy is actually more apt for active sensors. That's because desktop scanners must actively illuminate the object to be scanned. Similarly, active airborne and satellite-based sensors beam particular wavelengths of electromagnetic energy toward Earth's surface, and then measure the time and intensity of the pulses' returns. Over the next couple of pages, we'll consider two kinds of active sensors: imaging radar and lidar.

    There are two main shortcomings to passive sensing of the visible and infrared bands. First, reflected visible and near-infrared radiation can only be measured during daylight hours. Second, clouds interfere with both incoming and outgoing radiation at these wavelengths. Though Lidar can be flown at night, it can't penetrate cloud cover.

    Longwave radiation, or microwaves, are made up of wavelengths between about one millimeter and one meter. Microwaves can penetrate clouds, but the Sun and Earth emit so little longwave radiation that it can't be measured easily at altitude. Active imaging radar systems solve this problem. Active sensors like those aboard the European Space Agency's ERS and Envisat, India's RISAT, and Canada's Radarsat, among others, transmit pulses of longwave radiation, then measure the intensity and travel time of those pulses after they are reflected back to space from the Earth's surface. Microwave sensing is unaffected by cloud cover, and can operate day or night. Both image data and elevation data can be produced by microwave sensing, as you'll see in the following page.
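The microwave band quoted above — wavelengths from about one millimeter to one meter — corresponds to frequencies of roughly 300 GHz down to 300 MHz via f = c / λ. A minimal sketch of that conversion:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    """Frequency corresponding to a free-space wavelength: f = c / lambda."""
    return C / wavelength_m

print(round(frequency_hz(1e-3) / 1e9))  # ~300 GHz at a 1 mm wavelength
print(round(frequency_hz(1.0) / 1e6))   # ~300 MHz at a 1 m wavelength
```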

