Influence of the scale factor on the projection

I'm currently reading documentation about map projections to understand the source code of the Proj4 project.

The scale factor is mentioned in a variety of sources I have read. These sources explain its definition and give its value for some projections.

In the source code of Proj4, for the Mercator projection (spherical and ellipsoidal cases), the scale factor influences the coordinates on the projection:

/* P->k0 is the scale factor */
xy.x = P->k0 * lp.lam;
xy.y = -P->k0 * log(pj_tsfn(lp.phi, sin(lp.phi), P->e));

Why and how should I use the scale factor during the computation of the projection? Are there any valuable resources on the web?

This question is asked in the sense of projection computation. I can find the formulas for the forward and inverse projections, as well as the scale factor, in various resources, but none of them explains how I should use both in an algorithm. You have the definition of the projection and the definition of the scale factor, but it is not clearly written whether I should multiply or divide the result by the scale factor.

Is there a general rule: if I find the formula for any projection together with its scale factor, should I, in all cases, multiply or divide the result by the scale factor?

Unfortunately, the term "scale factor" is ambiguous. In cartography, maps, and projections, the concept and application of scale is of fundamental importance. By definition, a factor is something to multiply or divide by, so whichever of "scale" or "factor" serves as the modifier, the word pairing has no obvious meaning except in a particular context:

Every map or globe has a (stated or unstated) scale

The map or globe scale is the ratio of a distance on the map or globe to the corresponding distance on the ground or in reality. Either it has no units, or its units reflect the map and ground units: miles per inch, km per cm, etc. It is variously called scale, map scale, principal scale, representative fraction, or nominal scale. I like that last one, nominal scale, because most maps carry a single statement of their scale. Sadly, it is sometimes also called scale factor.

Every map projection results in a continuous variation in scale

All map projections distort linear scale, all over the map. This distortion is almost always termed scale factor (and sometimes "projection scale factor" or "point scale factor"). At any point on the map, it is the ratio of the local (distorted) scale to the nominal (undistorted) scale. In other words, it is the ratio of the distance implied on the map to the true ground distance.

Scale and computing a map projection

When computing a map projection, that is

(X, Y) ← projection (λ, φ)

you need to have some constant which depends on the size of your region of interest, the size of your map, and the map units involved. That constant is our friend the nominal scale. Since you don't provide the full code, and it's a little cryptic, I cannot say for certain, but I suspect that is what is meant by "scale factor" in your particular problem.

According to the Mathematics of the Mercator projection article on Wikipedia, in the spherical case the radius, R, serves as a substitute for scale:

X = R λ

Y = R ln [tan (π/4 + φ/2)]

(That looks similar to your code.)

How is radius a substitute for scale? Simple. It is the constant which determines the size of the map: a larger globe yields a larger map. If R is the Earth's radius, then your map scale will be one-to-one. If R is the radius of your globe in, say, mm, inches, or pixels, then X,Y will be in those same units and the map's nominal scale, NS, will be the ratio of your globe's radius to the Earth's radius:

NS = globe radius / Earth radius

To get ground distances from a measurement on a projected map, including any distortions:

ground distance = map distance / NS

To remove distortion, see below.

Assessing scale distortion of a map projection

To properly correct distortion at any point on a projected map, you ought to be able to calculate the distortion, SF (scale factor):

SF ← distortion (λ, φ)

In this case, SF has to be calculated, or provided in a look-up table, wherever it is needed. Does your code calculate "scale factor" as a function of "lam" and "phi"? I doubt it.

According to the Mathematics of the Mercator projection article on Wikipedia, which uses K for the scale factor:

Spherical case: K = sec φ

Ellipsoidal case: K = sec φ √(1 - e² sin² φ)

where e² is about 0.006 for all reference ellipsoids.

To correct for any projection distortion, i.e., to convert a projected map distance to get true (undistorted) globe distance, always divide by the scale factor, SF:

ground distance = map distance / SF

That might look familiar.

Are the constant nominal scale and the variable scale factor used in the same way?

Yes, they're used in exactly the same way. However, whenever the Earth is actually projected onto a map measured in real map units (mm, cm, inches, pixels, etc.), you need to apply both

  • a global nominal scale to get the correct globe/earth magnitude and units, and
  • a local scale factor to correct for projection distortion at any particular Earth position.

If you are not making measurements on an actual map, and you are just using coordinates in the same units as your Earth radius, R, then your nominal scale is trivial (NS = 1) and you only need the scale factor.

A 'scale factor', when specified in connection with a map projection algorithm, is a way to reduce the overall distortion due to the map projection.

For instance, the transverse Mercator projection usually has these projection parameters:

  • central meridian (also known as longitude of origin)
  • latitude of origin
  • scale factor
  • false easting
  • false northing

It's a cylindrical projection where the cylinder is oriented east-west. That is, the waist of the cylinder coincides with a meridian, or line of longitude. Along that line the scale is 1.0 (no distortion). As you move away from the central meridian, distortion increases.

One way to reduce the overall distortion is to apply a scale factor to all coordinates. Geometrically, this has the effect of pushing the cylinder's surface below the central meridian, and you end up with two roughly north-south lines, one on either side of the central meridian, that now have scale = 1.0. The scale on the central meridian is now whatever the scale factor is; in a UTM zone, that's 0.9996. This is also called the secant case. If the scale factor is 1.0 on the central meridian, it's the tangent case.

One (among many!) place that discusses all this and has pictures is the Map Projections page at ITC in the Netherlands.

Generally, the 'tangent' or normal coordinates are calculated for a map projection, then any scale factor is applied, then any false easting/false northing values are added.
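That sequence can be sketched in C (the struct layout and names are illustrative, not from Proj4 or any particular library):

```c
/* Sketch of the usual order of operations for a projected CRS:
   1. compute the "tangent" (unscaled) projected coordinates,
   2. multiply by the scale factor k0,
   3. add false easting / false northing. */
typedef struct {
    double k0;  /* scale factor on the central meridian, e.g. 0.9996 for UTM */
    double fe;  /* false easting  */
    double fn;  /* false northing */
} proj_params;

void apply_scale_and_offsets(const proj_params *p,
                             double x_tan, double y_tan,
                             double *easting, double *northing)
{
    *easting  = p->fe + p->k0 * x_tan;
    *northing = p->fn + p->k0 * y_tan;
}
```

For a northern-hemisphere UTM zone you would use k0 = 0.9996 and fe = 500000 m, so a tangent coordinate of x = 10000 m becomes an easting of 509996 m.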

Edit based on further information in the question

As Martin-F discusses, there's a difference between the projection parameter, "scale factor", and the "point scale" or "relative scale" that can be calculated at a point. The former affects the amount of distortion throughout the projected coordinate reference system. The latter is how you calculate what the relative distortion is at a point.

As an example, a UTM zone has a scale factor parameter of 0.9996. In a transverse Mercator projection, the central meridian would normally have a relative scale of 1.0 and the scale factor would be 1.0. In UTM, the central meridian now has a relative scale of 0.9996, so distances are 4 parts per 10000 too short. We could calculate the relative scale using a long equation on the ellipsoid, but on the sphere, it's

k = k0 / sqrt(1.0 - B*B), where B = cos(latitude) * sin(longitude - longitude0)

On the central meridian, longitude - longitude0 is 0, so sin(0) = 0; the entire B*B term vanishes and you're left with k = k0.

I don't know if it would be helpful, but you might want to look at John P. Snyder's Map Projections: A Working Manual (PDF here) and Guidance Note 7-2 from IOGP's Geomatics committee (maintainers of the EPSG Geodetic Parameter Dataset). Both discuss the algorithms.

MEMS accelerometers, magnetometers and orientation angles

When you need to evaluate the orientation angles of an object, the question arises of which MEMS sensor to choose. Sensor manufacturers provide a great number of different parameters, and it can be hard to tell whether a sensor fits your needs.

In brief: this article describes an Octave/Matlab script that estimates the errors in orientation angles evaluated from MEMS accelerometer and magnetometer measurements. The input data for the script are the sensors' datasheet parameters. The article may be useful for those who are starting to use MEMS sensors in their devices. You can find the project on GitHub.

We are using the following conditions:

    We're going to estimate the attitude of a stationary object.

Proportion and Scale in Architecture

Proportion and scale are used extensively in architecture to create forms that are both functional and pleasing to the eye. Establishing a balance between the two is what separates great architecture from other types of structures.

Designers use scale to create compositions that are appropriate in size for the intended use and proportion them in such a way that the separate parts relate to one another, as well as to the whole of the building, in a harmonious and rational manner.

The aim of building design is to create a composition that both inspires the senses and is, at the same time, organized in an identifiable and rational manner. While both scale and proportion impact the aesthetic quality of a building, they do so in different ways.

The Difference Between Proportion and Scale

Scale typically refers to the size of an object or form relative to a standard of reference. The standard of reference can be the overall composition or perhaps an adjacent form. However, scale is only a relative comparison of size.

Proportion, on the other hand, takes into account the proper or harmonious relationship between shapes relative to one another or to the composition as a whole. Proportion is concerned with both quality as well as the degree of emphasis or articulation.

In addition to design intent, there are other factors that can impact both scale and proportion. For example, the standard sizes of the materials used and structural requirements or constraints can also play a role.

Material Proportions

Materials, because of their inherent physical qualities, have limitations in terms of size. Their strength, elasticity, hardness, and durability all limit how long, wide, or thick they can be before breaking or collapsing.

A material’s proportions are also regulated by how it reacts to forces of stress. Some materials, such as brick, perform better under compression. Their size and mass are dimensioned in such a way as to take advantage of this natural quality.

Steel, on the other hand, is able to perform under both conditions of tension and compression. This provides the material with greater flexibility. It can be made into linear shapes such as columns and beams, but at the same time, it can also be made into sheets of metal. Regardless, there are limits to its size and shape determined by its point of failure.

Wood is a fairly flexible and elastic material, and can also be used for linear shapes such as posts and beams or as planar sheets.

Structural Proportions

Similar to building materials, structural members are proportioned based on their functional requirements and strength limitations.

A beam, for example, has significantly more depth proportionally to its width. This allows it to span greater distances and support more weight.

A column, on the other hand, increases in thickness relative to the amount of weight it needs to support. Both structural elements offer cues on the size and proportions of the spaces they occupy.

Manufactured Proportions

Architectural elements are proportioned not only based on structural limitations but can also be sized based on manufacturing standards and norms.

These proportions are often dictated by convention, but other factors such as ease of transport and packaging efficiency can also come into play since the elements are produced in manufacturing plants.

Because these manufactured products function in combination with other elements of a building, often also manufactured in a plant, their size and proportions are also established based on their relationship to those other parts: how they fit in the overall assembly, for example.

Examples of this are doors and windows, which are required to fit within modular masonry openings. Likewise, sheathing materials are sized to fit within the standard spacing of wood and metal studs and joists.

Proportioning Systems

While the sizes of materials, structural members, and other building elements are restricted by their ability to withstand natural forces, a building’s scale and proportions can also be determined by the designer.

A designer can choose to make a space taller or shorter or to give its footprint a square, rectangular, or circular shape. These decisions may be driven by individual design intent, but can also be derived from proportioning systems and methods generally accepted in building design.

These proportioning systems help unify the interior and exterior of a building and can provide a sense of order throughout the composition. Different shapes and forms can be unified with proportion to create a more harmonious relationship between each of the parts.

Throughout the course of history, proportioning systems have been used to create an aesthetic rationale that enhances the beauty of a building. Some common proportioning systems include:

  • Golden Section
  • Classical orders
  • Renaissance theories
  • Modulor
  • Ken
  • Anthropometry
  • Scale

Golden Section

The golden section is a mathematically-based proportioning system used by ancient civilizations including the Greeks and Romans as well as by modern designers, most notably the French-Swiss architect Le Corbusier.

It is based on the notion that mathematical proportions, which are prevalent throughout the universe, have a harmonious structure to them. They can be described in terms of the parts relative to the whole and the algebraic relation a/b = b/(a+b) ≈ 0.618.
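For readers who like to verify the arithmetic, a small C sketch (entirely illustrative) splits a segment per the golden section and confirms that the part-to-part and part-to-whole ratios coincide at about 0.618:

```c
#include <math.h>

/* Golden section sketch: split a segment of length `whole` into a smaller
   part a and a larger part b so that a/b equals b/(a+b).
   Both ratios then equal (sqrt(5) - 1) / 2, about 0.618. */
double golden_small_to_large(double whole)
{
    double ratio = (sqrt(5.0) - 1.0) / 2.0;  /* about 0.6180339887 */
    double b = whole / (1.0 + ratio);        /* larger part  */
    double a = whole - b;                    /* smaller part */
    return a / b;                            /* same value as b / (a + b) */
}
```

The function returns the same ratio regardless of the segment length, which is exactly the scale-independence that makes the golden section usable at any size of composition.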

This proportioning system is used not only in architecture, but also in other creative fields such as art, graphic design, music, and can even be found abundantly in the structures of living organisms, including the human body.

Classical Orders

The classical orders used by the Greeks and Romans of antiquity represented the perfect expression of beauty and harmony. They were not based on a fixed unit of measurement, but rather on the proportioning of the parts to the whole.

The basic unit of measurement was the diameter of a column. The dimensions of the shaft, the capital, pedestal, and entablature were all based on this starting dimension. The five classical orders, from least ornate to most ornate include:

  • Tuscan
  • Doric
  • Ionic
  • Corinthian
  • Composite

Renaissance Theories

The design theories of the Renaissance period were deeply rooted in the Greek mathematical system of proportions. Pythagoras discovered, based on the consonances of the Greek musical system, that there was a simple numerical progression 1, 2, 3, 4, that could be expressed in ratios 1:2, 1:3, 2:3, 3:4 to create harmonious compositions.

Architects of the Renaissance believed that the spatial units of a building should be based on these mathematical formulas. One of the most influential architects of the Italian Renaissance was Andrea Palladio, who in his book The Four Books on Architecture laid out the system of proportioning that makes rooms beautiful and harmonious.

Palladio, for example, proposed methods for determining the height of rooms based on their length and width. A flat ceiling room, he suggested, should be as tall as it is wide. Square rooms with vaulted ceilings should be one-third taller than their width. For other rooms, Palladio used Pythagoras’s theory of means to establish their height.

Modulor

The Modulor proportioning system was made famous by the French-Swiss architect Le Corbusier. Based on the measuring concepts used by the ancient Greeks, Egyptians, and other high civilizations, the Modulor uses the mathematical proportions of the human body as a point of reference.

The Modulor combines the aesthetic dimensions of the Golden Section and the Fibonacci numerical series with the scale of the human body. Le Corbusier saw the Modulor not only as a sequence of numbers but as a system of measurement that could dictate the size of volumes and surfaces based on a human scale.

Ken

The ken was introduced in the second half of Japan’s Middle Ages. It was originally used to designate the interval between columns and varied in size. Eventually, however, it became standardized as a unit of measurement for residential architecture.

The base unit of measurement is a floor mat with 1:2 proportions, which can be configured in a variety of ways based on the number and placement of the mats used. The traditional mat size was determined by the area needed to seat two people, or for one person to lie down. However, as the use of the ken grid spread, mats eventually came to be sized based on structural requirements rather than on human dimensions.

The size of a room was determined by the number of ken used. The more mats used, the larger the room. The ceiling height, in turn, is derived from the number of mats x 0.3. The ken evolved from a unit of measurement for buildings to an aesthetic module that ordered the materials, space, and structure of Japanese architecture.

Anthropometry

Anthropometry refers to the dimensioning of objects relative to the human body. While ancient cultures used human body proportions as a starting point for design and aesthetic considerations, it was used in a more theoretical manner. Anthropometry, on the other hand, is more practical and functional in its approach.

Anthropometry is based on the premise that the proportions of the human body affect the proportion of things that we handle. Limitations in terms of the height and distance of things we try to reach play a role in how we move and interact with our surroundings. The size of furnishings and the height at which architectural elements are placed reflect this notion.

Scale

Scale takes into account the size of things relative to other reference items. It can only be perceived when compared to something else. This reference point can be a standard that we have universally come to expect, or it can be an adjacent volume or space that serves as a relative scale.

Scale can also be perceived relative to the human body, as has been previously discussed. A room which is intimate can make us feel more comfortable and in control, while a room which is monumental in scale can make us feel small and insignificant.

Among the dimensions of a room, the height has a greater effect on scale than the width or length. In addition to the vertical dimension, other factors that affect scale are the color, shape, and pattern of the surrounding surfaces. Likewise, items placed within a room can impact the perception of scale.

Closing Notes

Proportioning systems have been used throughout the history of architecture to create both order and aesthetic beauty. The goal is always to strike the best balance between scale, that is, the size of forms relative to others, and proportion, the harmonious relationship between the parts.

Both scale and proportion have shaped the course of architecture through the ages. Those buildings that we remember the most or that have been most cherished by historians have managed to find a unique harmony between these two elements of design.


The principles outlined in this article are derived from the illustrative works by Francis D.K. Ching. If you would like to read more on the topic and see the graphic illustrations that have made the book a classic among students of Architecture through the years, check out Architecture: Form, Space, and Order.

About Your Own Architect

Hi, I'm Gio Valle, creator of Your Own Architect. Whether you are looking to design and build your own house or simply doing some research, you've come to the right place. I developed this site in the hopes that it will provide you with valuable information and help answer your questions, so that you can create your perfect home.

Giovanni C Valle, AIA, LEED AP

I'm a registered architect in New York and a member of the American Institute of Architects.


Toward a New Theory of Power Projection

Now that the pandemic crisis is hammering America’s finances, U.S. strategy risks veering even further into permanent insolvency. Even before the crisis, the military demands of an intense global competition with China, Russia, and secondary competitors like Iran and North Korea were becoming financially untenable. Now, the costs of the current crisis — in both the short and long term — are likely to lead to further cuts from the defense budget and may call into question the sustainability of major U.S. commitments. The United States is likely to soon be engaged in a painful exercise: undertaking a truly fundamental prioritization, identifying defense capabilities and commitments that can be abandoned, or pursued in more efficient ways, without undue risk. One item that needs to be on that list of priorities is expeditionary power projection.

Long-distance power projection — the ability to transport overwhelming air, sea, and land power to far-off places like Taiwan, Korea, or the Baltics and win decisively in major combat — exercises a predominant influence on U.S. defense policy. It generates the most demanding requirements for military capacity and capabilities, determines many systems the services buy, and shapes the concepts the services develop. It is no exaggeration to say that the U.S. military of today is largely built to project power in this way.

Yet, even before the current crisis, several powerful trends called for a fundamental reassessment of the way in which the United States projects power. The conventional method could be termed “expeditionary power projection” — the strategy of stationing the bulk of the joint force in the United States and deploying it to distant locales to decisively defeat aggression. This approach is rapidly becoming obsolete. Picking up thousands of tons of mass and carrying it to a location on the other side of the world where an opponent has decisive operational advantages proved successful against second-tier powers like Iraq, but it will not be effective against near-peer militaries like Russia and China, or even a nuclear-armed North Korea. That approach, however, is only one way of solving the problem of long-distance deterrence and defense, and it is time for the United States to seek other ways of doing so. This essay briefly outlines several powerful and interconnected flaws in expeditionary power projection and then articulates principles of a possible alternative concept.

We’ll Lose When We Get There

The most well-known and widely discussed operational flaw in expeditionary power projection is the so-called “anti-access/area denial problem” — the idea that Russian and Chinese anti-access and area denial capabilities can blunt the effects of U.S. military operations. Dozens of studies have argued that U.S. forces will be hard-pressed to operate effectively anywhere near the forward edge of the battle and will sustain significant losses in the attempt to get there. Meanwhile, North Korea has its own version of anti-access and area denial capabilities — an increasingly sophisticated missile force and nuclear deterrent. This situation is partly a function of new precision strike and sensing technologies being deployed by U.S. competitors but also of basic physics: Potential adversaries will be fighting very close to home and have decisive geographic advantages in any of these contingencies.

To be sure, America’s view of the anti-access/area denial problem may be disconnected from the actual strategy of U.S. rivals. Some analyses have questioned how effective some of these denial capabilities would be in practice. There are at least partial remedies to the anti-access/area denial challenge in terms of posture, concepts, and capabilities. If the anti-access/area denial problem poses the sole barrier to U.S. expeditionary power projection ambitions, the United States just might be able to surmount it. But it does not.

We Don’t Have the Lift to Get There

A second challenge is that the United States does not have nearly enough strategic lift to transport land forces — and the sustainment foundation for air units — to far-off fights in a timely manner. Airlift cannot haul enough weight, while most major sealift ships are in reserve status and are generally old, short of spare parts, and potentially unreliable. Without major recapitalization investments, sealift capacity will sharply decline after 2020. A devastating analysis contended that the U.S. sealift fleet could be a “single point of failure” for power projection missions.

In theory, the United States could buy itself out of this shortcoming. But, given increasing fiscal constraints, massive new investments in strategic lift seem unlikely. The United States will need months, therefore, to build up necessary forces in any threatened theater — and potential adversaries, who have closely studied U.S. operations in the Gulf and Iraq Wars, now aim to achieve their local objectives as quickly as possible. Lift shortfalls alone mean that an expeditionary approach to power projection, which assumes a long period of amassing forces in the region, is no longer a credible way of threatening responses to many cases of major aggression.

Forces in Transit Will Be Stymied or Wrecked

Units in transit to a distant war will also face an increasingly devastating gauntlet of attacks, fueled in part by the emerging revolution in unmanned and swarming systems, pervasive sensing, and artificial intelligence. The full maturation of the precision-weapons revolution — alongside the emergence of related technologies such as autonomy and artificial intelligence — is creating an unprecedentedly lethal battlefield environment. These trends apply to movement across oceans and even airways: As James Lacey recently argued in War on the Rocks, “The oceans, never a hospitable environment, are increasingly deadly, to the point where the survivability of independently operating naval task forces are in question.”

In a future regional conflict as U.S. forces steam or fly toward a battle, an adversary could employ semi-autonomous unmanned aircraft, drone submersibles, small vessels, and smart mines to hammer the air and sea convoys. Attack submarines could decimate them with torpedoes and cruise missiles while bombers shoot long-range fire-and-forget weapons from hundreds of miles away. Clouds of swarming, tiny unmanned aerial systems could emerge from surfaced submarines or passing aircraft and descend on transport ships and their escorts — or even intercept slow-moving transport aircraft. Cyber operations will scramble the information systems and controls of U.S. vessels and create logistical chaos in ports. An aggressor could use direct attacks on space assets and cyber operations to disrupt communications and navigation, including GPS guidance. Forces that make it to their destination will then face crippling logistics shortfalls and disruptive attacks within theaters. Meanwhile, aggressors will surely threaten allies and partners with economic, cyber, or military attacks to ensure that they deny U.S. forces access to critical bases, staging facilities, and even airspace.

In the perpetual contest between offense and defense, the United States will develop answers to some of these risks. Directed energy weapons, for example, are being investigated as a possible answer to drone swarms. But, the emerging era of massed strikes will inescapably boost an aggressor’s ability to degrade U.S. forces in transit.

Meddling in the U.S. Homeland Will Disrupt Mobilization

Those flaws in power projection are joined by a newer challenge associated with emerging information tools and technologies that have the potential to stymie the domestic foundation for projecting power — a danger partly embodied by what a new RAND report calls “virtual societal warfare.” As advanced societies become increasingly dependent on information networks, algorithmic decision-making and a super-integrated “Internet of Things,” and as the ability to manipulate truth becomes more prevalent and powerful, the potential for an outside actor to create mischief will be very great. An aggressor could generate widespread confusion and chaos in ways that would be especially problematic for strategies of expeditionary power projection, including targeting mobilization and logistics systems in the United States.

Such a campaign might begin with an effort to prevent power projection from happening in the first place. Over social media and via “deep fake” video and audio, aggressors will seek to muddy the facts at issue and weaken the basis for a response. The resulting ambiguity could create a window of uncertainty — from a few days to a week or more — in which the United States and others might hesitate to respond. Such hesitation is especially problematic regarding expeditionary forms of power projection that demand that the United States start and sustain force flow in a timely manner.

If the United States goes ahead with plans to deploy forces, the aggressor could then undertake more hostile forms of disruption. The aggressor could launch ransomware attacks on U.S. municipalities like the attack that recently caused New Orleans to declare a state of emergency, dislocating the delivery of public services. It could use social media tools to foment protests and opposition to the war.

If those efforts failed to deter a U.S. president from beginning force flow, escalating attacks could focus more precisely on U.S. mobilization and logistics capabilities, including the disruption of military units as they leave a garrison or base. Some of these attacks would focus on traditional critical infrastructure targets such as energy and telecommunications networks. However, in a new era of more personalized and generalized virtual societal warfare, an aggressor could become more precise, emptying the bank accounts of service members and their families, issuing fake warrants for the arrest of their children, bringing havoc to the “Internet of Things” in their homes, and broadcasting verbal warnings from their Alexa or Siri speakers.

We cannot know in advance just how crippling these virtual attacks would be. Societies and militaries are resilient. Even today, in the midst of the pandemic, the United States military could — with significant risk — undertake large-scale power projection missions. But, even partly effective homeland-disrupting campaigns pose challenges for expeditionary models of power projection: The time, domestic logistical effort, and political will needed to gather forces and deploy them thousands of miles all provide time for an aggressor to weaken the national consensus behind such a response as well as the physical processes needed to accomplish it.

In fact, the risk of such attacks extends beyond the direct adversary in any future conflict. Multiple U.S. rivals could gang up in a crisis or war to impose even greater levels of disruption. In a war with China, for example, Russia, Iran, North Korea, and others — even individuals or non-governmental networks — might see a golden opportunity to unleash cyber and information warriors to impede the U.S. response and deal a decisive blow to the U.S. reputation for military primacy. The primary aggressor could also employ such actors as surrogates. A future U.S. effort to dispatch a classic expeditionary power projection effort could trigger a whole range of disruptive attacks.

Toward a New Approach

These threats to expeditionary power projection are not new. In fact, U.S. military services and other parts of the U.S. government are working on ways to mitigate them. Yet, given the unavoidable geographic asymmetry and current trends in precision weaponry, unmanned systems, and information networks, it seems increasingly dangerous to assume that the United States can credibly threaten to project expeditionary power over trans-oceanic distances to the doorstep of other major powers and “win” extended, large-scale conflicts at an acceptable cost. The question of what promises the United States continues to make in the most demanding power projection cases is beyond the scope of this essay. But, if it does intend to continue serving as a backstop deterrent to major aggression in far-off contingencies, it will need a new approach. Such an alternative could have three primary elements: forward-deployed or long-distance strike capabilities to degrade invading forces; concepts for creating the prospect of a prolonged resistance even if the aggression achieves some goals; and ways of imposing costs on an aggressor across multiple domains beyond military operations.

An initial step would be to threaten credible local military effects without transporting large U.S. forces to the battle area. This step could include helping potential targets of aggression make themselves less vulnerable in part by taking advantage of the same sorts of emerging technologies that threaten expeditionary models of power projection. The United States could help partners and allies develop autonomous swarming systems, smart mines, and cheap, anti-armor and anti-ship missiles to disrupt and wear down an invasion force. T.X. Hammes has made a compelling argument for the value of such technologies in the hands of U.S. allies and partners. The United States could also conduct train-and-advise missions to help build effective reserve forces capable of operating these systems. Additionally, it could aid allies and partners in developing powerful cyber capabilities to disrupt the homeland of an aggressor and its own power projection activities — including the sort of comprehensive virtual societal warfare attacks discussed above.

In terms of its own military role in the initial fight, the United States could focus on ways to impose costs on an initial attack without relying on the long-distance deployment of major combat elements. This path would not presume an ability to forward-deploy a significant number of additional heavy combat units — which is both politically infeasible and strategically provocative in most cases — but would, instead, mark an effort to use innovative approaches to dispersed firepower to achieve deterrent effects. The sinews of such a revised approach are emerging in embryonic form in a range of widely-discussed concepts that envision resilient networks of somewhat self-organizing nodes of mostly forward-deployed fighting power to bring firepower to bear on aggressive forces. Such a network could be supported by select types of long-range strike systems, including cyber, space, long-range bombers and missiles, and limited, stealthy maritime and air assets.

In support of this emerging vision of distributed firepower, a modified U.S. approach to power projection would invest in larger numbers of various precision weapons capable of penetrating contested airspace. It would accelerate the research and deployment of swarming and unmanned systems that do not rely on airfields for operation. In a maritime theater like the Pacific, it would focus in part on stealthy and submersible platforms on regular local patrol. It would experiment with multiple new force designs similar to but well beyond what the Army is beginning to do with its Multi-Domain Task Forces.

Having laid the groundwork to be able to impose costs on aggression without large-scale force movement, the United States would then work with allies and partners on the second element of a revised approach: ensuring that any resistance would be prolonged, confronting an aggressor with the potential of an extended fight. The United States could help partner nations build the capabilities for long-term resistance, including well-equipped reserves trained for insurgency; large magazines of cheap, simple rockets and missiles, as well as hidden 3D printing facilities to churn out more; stealthy underground reservoirs capable of releasing swarms of attacking drones on time-delayed schedules; and cyber units based around the world that are capable of launching crippling attacks even if their homeland were overrun. The United States could also pre-set, and then directly support, a potent civil resistance to complement a military insurgency.

When the Soviet Union invaded Afghanistan in 1979, the United States declared the attack illegitimate and sought to reverse it — in part with economic and political penalties but without any military “power projection” beyond aid to the Afghan resistance. The analogy is not exact, but a new approach could search for supercharged versions of a very similar strategy — one that threatens an aggressor with a long and debilitating campaign rather than a quick and painless fait accompli.

Finally, the third component of a revised strategy for power projection would involve a comprehensive global campaign to harass an aggressor’s world-wide interests. This third component — a cross-domain, holistic, non-kinetic, or “unrestricted” approach to power projection — would not involve U.S. attacks on aggressor military forces far from the area of aggression, but would employ non-military, often non-kinetic means to impose economic, political, and social costs. The aggressor state’s companies would see their activities embargoed or disrupted with electronic or regulatory means; movements protesting or launching political harassment of the aggressor’s local activities could be funded and empowered. More ambitiously, the United States could threaten forms of economic strangulation, employing elements of what T.X. Hammes has called “offshore control” and Mike Pietrucha has termed a “strategic interdiction strategy” — taking advantage of an aggressor’s dependence on important exports of materials, energy, and supply chains to interdict its maritime shipping and potentially other sources of trade. Such large-scale interdiction efforts would have to be planned in advance, including agreements from other nations to play roles in the effort, but neither the threats nor the agreements would need to be made public.

Such a campaign would also incorporate a multilateral effort to wreck the aggressor’s geopolitical legitimacy and influence. This effort could comprise everything from U.N. resolutions to expelling ambassadors to a coordinated multilateral campaign to encourage nations to clamp down on its political and cultural influence tools to global bans on broadcasting by the attacker’s state media. On its own, such reputational punishment cannot be expected to deter military action. Yet, Russia and especially China care deeply about being accepted as legitimate great powers, and the prospect of a far more fundamental expulsion from the world community would not be treated lightly.

Taken together, these three components would add up to a new concept of projecting power and, by extension, achieving deterrence in distant locations. Its objective would be to demonstrate to a potential attacker that large-scale aggression would be ruinously costly to their society as well as indirectly threatening to the stability of their regime. This perspective would have clear implications for defense policy and investment — for example, encouraging a partial shift in the balance between emphasis on heavy, contiguous U.S.-based joint forces and more dispersed, forward-based, cutting-edge technologies and unit types as well as funds to support allied and partner acquisition of capabilities central to this approach. The U.S. Marine Corps’ new force design guidance provides a good example of the scale of rethinking that will be required.

The era of expeditionary power projection dominance is gone, at least as assumed by the traditional model. Pretending otherwise will continue to waste resources, skew the investments and concepts of the services, and, if war does occur, risk early defeat and/or catastrophic escalation. The U.S. effort to support the deterrence of a major war has played an important role in sustaining peace since 1945 and can continue to do so — but it is time for a major shift in how the United States plans to fulfill this critical military mission.

Michael J. Mazarr is a senior political scientist at the nonprofit, nonpartisan RAND Corporation. The views expressed here are his own.

Distance Education

While all courses and programs listed are offered through traditional face-to-face teaching on campus, the department also offers select programs as fully online modules. These online programs include an online version of our Geography Minor, the GeoManagement Undergraduate Certificate, and an online version of the Geospatial Intelligence Graduate Certificate.

Courses Available Online

Individual courses which are currently available online (in addition to their traditional delivery modes) are:

Course List
Code      Title                                             Credits
GGS 101   Major World Regions (Mason Core)                  3
GGS 102   Physical Geography (Mason Core)                   3
GGS 103   Human Geography (Mason Core)                      3
GGS 110   Introduction to Geoinformation Technologies       3
GGS 121   Dynamic Atmosphere and Hydrosphere (Mason Core)   4
GGS 122   Dynamic Geosphere and Ecosphere                   4
GGS 210   Introduction to Spatial Computing                 3
GGS 300   Quantitative Methods for Geographical Analysis    3
GGS 302   Global Environmental Hazards                      3
GGS 303   Geography of Resource Conservation (Mason Core)   3
GGS 310   Cartographic Design                               3
GGS 311   Geographic Information Systems                    3
GGS 312   Physical Climatology                              3
GGS 315   Geography of the United States                    3
GGS 317   Geography of China (Mason Core)                   3
GGS 379   Remote Sensing                                    3
GGS 380   Geography of Virginia                             3
GGS 416   Satellite Image Analysis                          3
GGS 462   Web-based Geographic Information Systems          3
GGS 553   Geographic Information Systems                    3
GGS 650   Introduction to GIS Algorithms and Programming    3
GGS 680   Earth Image Processing                            3
GGS 681   Social Media Analysis                             3
GGS 692   Web-based Geographic Information Systems          3


However, a map in a GIS can be shrunk or enlarged at will on the screen or on paper. You can zoom in until the screen displays a square metre or less, or zoom out until the screen displays all of BC. This means that geographic data in a GIS doesn't really have a 'map scale'.

    The amount of detail: the map must not be overwhelmed with detail and become too crowded.

If you put a 1:20,000 scale paper map on a reducing photocopier, you can make it into a 1:100,000 map (ie reduce it by a factor of 5). However, areas of detail will probably merge into big black blobs, and most of the text on the map will be too small to read.
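The photocopier arithmetic above can be sketched in a few lines. This is a minimal illustration; the function name is mine, not from the original text:

```python
# Hypothetical helper (name is illustrative): shrinking a paper map on a
# photocopier multiplies its scale denominator by the reduction factor.
def reduced_scale_denominator(denominator: float, reduction_factor: float) -> float:
    """Scale denominator of a paper map after photo-reduction."""
    return denominator * reduction_factor

# Reducing a 1:20,000 map by a factor of 5 yields a 1:100,000 map.
print(int(reduced_scale_denominator(20_000, 5)))  # prints 100000
```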

A GIS map's annotation (ie text and symbols) must be designed with a display scale, just like a paper map. There is a range of scales at which it will 'look right', even though the GIS software can display it at other scales.

  • What features have been omitted ?
  • What non-existent features are represented ?
  • How correct is their classification ?
  • How current is the data ?
  • How far away is a map feature from its actual location in the world ?

A rigorous statement of accuracy will include statistical measures of uncertainty and variation, as well as how and when the information was collected. Spatial data accuracy is independent of map scale and display scale, and should be stated in ground measurement units.

Generally, a line cannot be drawn much narrower than about 1/2 a millimetre. Therefore, on a 1:20,000 scale paper map, the minimum distance which can be represented (resolution) is about 10 metres. On a 1:250,000 scale paper map, the resolution is 125 metres.
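This rule of thumb (resolution equals the minimum line width, converted to ground units, times the scale denominator) can be written as a small worked example. The function name and the 0.5 mm default are assumptions taken from the text above, not from any GIS package:

```python
# The text's rule of thumb: the narrowest drawable line on paper is about
# 0.5 mm, so the minimum representable ground distance (resolution) is
# line width x scale denominator, converted from millimetres to metres.
MIN_LINE_WIDTH_MM = 0.5

def ground_resolution_m(scale_denominator: float,
                        line_width_mm: float = MIN_LINE_WIDTH_MM) -> float:
    """Smallest ground distance (metres) representable at a given map scale."""
    return line_width_mm * scale_denominator / 1000.0

print(ground_resolution_m(20_000))   # 10.0 metres, as on a 1:20,000 map
print(ground_resolution_m(250_000))  # 125.0 metres, as on a 1:250,000 map
```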

However, most GIS store locations in ground units (eg UTM coordinates, or Longitude/Latitude) with a resolution of a centimetre or less. This resolution is far finer than the uncertainty of any of BC Environment's data.

If a raster coverage is derived from vector linework, its pixels should not be smaller than the uncertainty in the linework. If it comes from an air-photo or satellite image, its pixels should not be smaller than the resolution of the camera that recorded it.

The density of paper map's data is limited by its scale (and therefore its resolution). Areas (polygons) cannot be shown if they are smaller than the lines which draw them. For example, a polygon less than 250 metres wide cannot be drawn on a 1:250,000 scale map. This minimum size also limits the number of polygons that can be represented in a given area of a paper map.

A GIS stores its data digitally, so the minimum size of a feature is limited only by the resolution, which is effectively infinitesimal. Where the degree of detail in a coverage is arbitrary (eg soil polygons), a data definition or convention should specify the minimum size of features, and therefore their density. Without this, different parts of the same coverage may have widely varying degrees of detail, influencing analysis results.

A GIS stores a line (eg a lake shoreline) as a sequence of point locations, and draws it with the straight edges that join them. There is no limit to how many points can be stored, or how close together they may be.

The amount of detail on line features should be limited just like data density. It does not make sense to store points at intervals which are shorter than the accuracy of their locations.
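One way to sketch this guideline is to drop vertices that fall closer to the previously kept vertex than the data's positional accuracy. This is an illustrative simplification, not the algorithm any particular GIS uses (a production tool would more likely apply something like Douglas-Peucker simplification):

```python
import math

def thin_line(points, min_spacing):
    """Drop vertices closer than min_spacing (ground units) to the last kept vertex."""
    if not points:
        return []
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(kept[-1], p) >= min_spacing:
            kept.append(p)
    # Keep the final endpoint so the line's overall extent is preserved.
    if kept[-1] != points[-1]:
        kept.append(points[-1])
    return kept

# With 20 m positional accuracy, intermediate points a few metres apart add nothing.
line = [(0, 0), (5, 0), (8, 0), (30, 0), (32, 0)]
print(thin_line(line, 20))  # [(0, 0), (30, 0), (32, 0)]
```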

Some operations may result in features which are smaller than the data uncertainty. For example, overlaying rivers and forest polygons may create 'slivers' along the riverbanks which are 10 metres wide, when the uncertainty of the data is 20 metres. These slivers should be ignored, or included with their neighbours before the results of the overlay are used for further analysis.
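A rough way to flag such slivers is to compare an approximate polygon width against the data uncertainty; the estimate 2 × area / perimeter is exact for a long, thin rectangle. The helper names are illustrative assumptions, not from any GIS package:

```python
def approx_width(area, perimeter):
    """Rough minimum width of a long, thin polygon (exact for a long rectangle)."""
    return 2.0 * area / perimeter

def is_sliver(area, perimeter, uncertainty_m):
    """Flag polygons narrower than the positional uncertainty of the data."""
    return approx_width(area, perimeter) < uncertainty_m

# A 10 m x 1000 m strip along a riverbank, with 20 m data uncertainty:
print(is_sliver(area=10 * 1000, perimeter=2 * (10 + 1000), uncertainty_m=20))  # True
```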

For example, management unit boundaries could be stored in one provincial coverage, and annotation layers could be developed for labelling them at display scales of 1:20,000, 1:250,000, and 1:2,000,000.

If done carefully, this avoids duplication of the same data for display at different scales.

For example, a detailed forest cover map may be generalized by combining polygons with similar characteristics. This reduces the number of objects in the coverage.

Conversely, a detailed ecosystem classification map may be generalized by reducing the amount of detail in the boundaries between regions, without reducing the number of regions.

Generalizing a raster image usually reduces both the number of objects, and the amount of detail.

A GIS coverage should be identified by its accuracy (or uncertainty) and data density (or minimum feature size).


While there is a growing literature on barriers and facilitators to scaling up health interventions in resource-constrained settings (Hanson et al., 2003; Smith et al., 2015), this study makes a unique contribution by considering the relative role that factors play in the two dimensions of scale up defined by the WHO/ExpandNet model—institutionalization or vertical scale up, and expansion or horizontal scale up (WHO and ExpandNet, 2010). As a result, the lessons from the scale up of CHX in Bangladesh have implications for planning and studying scale up in low-income settings across interventions. Facilitators that are not specific to any intervention and that primarily influenced institutionalization of CHX in Bangladesh included external policies (e.g. WHO guidelines) and support from a collaborative partnership (the Chlorhexidine Working Group). A study of the Chlorhexidine Working Group identified several factors that contributed to its success, including strong leadership and communications, clear terms of reference and adequate resource allocation (Coffey et al., 2018). At the country level, engagement of stakeholders in policy and guideline development was an important institutionalization facilitator. National implementers can plan a scale-up process with early engagement of a variety of stakeholders, as is supported by other research findings and scale-up guides (WHO and ExpandNet, 2010; Story et al., 2017). Bangladesh’s NTWC for Newborn Health provides an excellent model for meaningfully engaging a broad group of stakeholders to foster the political commitment and support necessary for scale up (Edwards, 2010).

While stakeholder-led policy adoption and guideline development were strengths of Bangladesh’s scale-up process, significant challenges were faced with institutionalizing the new policy in systems required for service delivery within the two government health directorates—particularly regarding plans and systems for commodity procurement, distribution and performance monitoring. Fragmented health systems complicate and expand the efforts required for institutionalization, and performance may be variable between different public systems, as seen in other countries (Billings et al., 2007). Furthermore, while the scale-up team made efforts to promote institutionalization in the non-profit and private sectors—a significant source of care in Bangladesh and many LIC (Wang et al., 2011)—the government has limited regulatory control over these sectors, another common barrier (Hanson et al., 2003). Bangladesh’s experiences suggest that scale-up planners should assess and tailor plans for achieving institutionalization according to the structure of the health system and sources of care. For example, during Nigeria’s scale up of CHX, the decentralized procurement in its health system meant that substantial advocacy was needed for state governments to begin allocating funds to purchase CHX (Nigeria Federal Ministry of Health and the Maternal and Child Survival Program, 2018). Scale-up benchmarks are useful approaches for identifying key institutionalization changes needed for scale up (Moran et al., 2012) and should be adapted to reflect the full structure of the country’s health system and each major organization where changes are needed.

Following institutionalization, the principal barriers to expansion of CHX in Bangladesh were inadequate procurement and weaknesses in the supply chain and monitoring systems. Drug and commodity supply chain weaknesses are a well-documented challenge in the public health systems of many LIC (Cameron et al., 2009; Dickson et al., 2014; Yadav, 2015) that can negatively affect the quality and impact of public health programmes (Pasquet et al., 2010). Insufficient procurement was a major reason that Bangladesh’s public health system did not extend CHX distribution through community health workers, as recommended in the national guidelines [National Technical Working Committee on Newborn Health (Bangladesh), 2014]. Accurate quantification and budgeting are required for procurement to meet the needs when expanding new interventions. Following procurement, maintaining the supply of a new drug and commodity depends on the functioning of the existing logistics systems unless the scale up includes investments in strengthening the existing system or establishing parallel systems, which have demonstrated inefficiencies and negative effects (Rudge et al., 2010; Windisch et al., 2011). Supporting and strengthening logistics systems requires considerable effort and may best be accomplished through partnerships across health initiatives and engagement of technical expert groups (Dickson et al., 2014).

Our findings also demonstrate that there are opportunities for scale-up resource teams to learn from peripheral adaptations at district and facility level, such as the innovative approaches some facilities took to provide CHX during stock outs. Although fidelity to the core components of an intervention is considered critical, the scale-up literature also recognizes that adaptations at subnational levels are often required for successful scale up (Simmons and Shiffman, 2007; Chambers and Norton, 2016; Kemp, 2016). Even in the absence of planned adaptations, theories of complex adaptive systems and street-level bureaucracy suggest that local adaptations will develop spontaneously, through dynamic interaction processes like self-organization and sense-making (Lipsky, 1980; Paina and Peters, 2012; Lanham et al., 2013; Gilson et al., 2014). Local adaptations may be helpful (e.g. solutions to local problems that increase healthcare quality and access) or harmful (e.g. short cuts or misinterpretations of policy that diminish quality and access). Although the need for, and reality of, local adaptations are recognized, there are few examples of implementation efforts where helpful adaptations were systematically fostered and replicated (Aarons et al., 2012; Chambers and Norton, 2016). While there can be important benefits to learning from local adaptations, implementers should provide guidance on what aspects of the intervention—both the technical components and the delivery package—may be adapted without risking efficacy (Kemp, 2016), and should also monitor for any harmful adaptations.

For countries currently planning the scale up of CHX, this case study from Bangladesh highlights intervention-specific lessons. Policymakers should consider CHX formulations and sourcing as important factors determining cost and procurement options. Communications should highlight the technical simplicity and evidence base for effectiveness/superiority of CHX, well-known facilitators of adoption from the ‘Diffusion of Innovations’ theory (Rogers, 2003). In Bangladesh, the perception among policymakers, providers and families that CHX is simple and superior to previous cord care approaches was helpful in both securing policy changes for institutionalization and generating demand during expansion. Other studies indicate that CHX is broadly perceived to be simple and effective among international stakeholders (Coffey et al., 2018) and that CHX is consistent with family preferences for active cord care that are present in many countries (Coffey and Brown, 2017). Bangladesh’s attention to distinctive packaging of CHX with pictorial instructions helped to increase the visibility of CHX and also likely averted eye injury due to misapplications, which occurred in other countries implementing CHX, prompting the issuance of a World Health Organization alert (World Health Organization, 2019). The only point of dissatisfaction with CHX reported in our study, albeit infrequently, was that CHX caused a delay in cord separation. Mothers’ perceptions that the cord stump takes longer to separate when CHX is applied have been confirmed in an analysis of data from the Bangladesh CHX trial, which found that cord separation took 50% longer in newborns treated with CHX (Mullany et al., 2013). Behaviour change messages may be designed to address this concern.

A limitation of this study is its focus on scale up in the public health sector. Although we were able to include some stakeholders from the non-profit sector, we were not able to explore provision of CHX in the private sector. There was significant adoption outside the public health system, as evidenced by 2017 private market sales of 235 000 bottles, according to records from the producer, ACI pharmaceuticals (Callaghan-Koru et al., 2019). Further investigation is needed to understand the roles of the producer and local pharmacy shops in promoting private uptake.

A strength of this study is the use of CFIR domains to guide data collection and analysis related to barriers and facilitators (Damschroder et al., 2009). Domains like intervention characteristics, scale-up process and outer setting, and several of their associated constructs, matched well with the phenomena we were studying. For example, outer setting considers external policy and incentives, which were important in the CHX scale up in Bangladesh through WHO policy changes and funding commitments from USAID. However, constructs in Inner Setting did not explicitly address several critical health systems barriers, such as supply chain weaknesses, provider vacancies and turnover, and limited patient access. Adaptation of the CFIR for low-income contexts, and the common health systems constraints experienced in those contexts, such as those outlined by Hanson et al. (2003), would enhance the utility of CFIR for global health researchers. As the body of implementation science in support of scale up continues to grow, the common use and adaptation of guiding theories and frameworks will likely help researchers and practitioners to synthesize information and better address implementation challenges (Nilsen, 2015; Damschroder, 2019).

The scale up of CHX in Bangladesh was successful in institutionalizing a new intervention in the public health systems and expanding provider knowledge on CHX by training over 80 000 public sector workers. As a result of stakeholder engagement, rapid national training, and broad acceptance of CHX, by mid-2017 CHX was applied to ∼70% of live births at public facilities (Callaghan-Koru et al., 2019). However, delays in institutionalizing CHX into logistics and monitoring systems resulted in many newborns not being reached even when providers had already received training. To achieve a greater public health impact, the scale-up effort must expand to reach the 22% of deliveries that take place in private facilities and 62% that take place at home (NIPORT et al., 2016). The barriers and facilitators for institutionalization and expansion during Bangladesh’s implementation of CHX provide lessons that are broadly applicable for scaling up CHX and other interventions in similar settings.

Map reading is the process of looking at the map to determine what is depicted and how the cartographer depicted it. This involves identifying the features or phenomena portrayed, the symbols and labels used, and information about the map that may not be displayed on the map. Reading maps accurately and effectively requires at least a basic understanding of how the mapmaker has made important cartographic decisions relating to map scale, map projections, coordinate systems, and cartographic compilation (selection, classification, generalization, and symbolization). Proficient map readers also appreciate artifacts of the cartographic compilation process that improve readability but may also affect map accuracy and uncertainty. Masters of map reading use maps to gain better understanding of their environment, develop better mental maps, and ultimately make better decisions. Through successful map reading, a person’s cartographic and mental maps will merge to tune the reader’s spatial thinking to the reality of the environment.

Buckley, A. R., and Kimerling, A. J. (2021). Map Reading. The Geographic Information Science & Technology Body of Knowledge (1st Quarter 2021 Edition), John P. Wilson (Ed.). DOI: 10.22224/gistbok/2021.1.8.

This entry was published on January 27, 2021.

This topic is also available in an earlier edition: DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., and Wentz, E. (2006). Map reading. The Geographic Information Science & Technology Body of Knowledge. Washington, DC: Association of American Geographers. (2nd Quarter 2016, first digital).

cartographic map: a graphic representation of the environment that can be experienced physically, for example, through sight, sound, or touch

classification: the process of grouping or ordering features into categories (for qualitative data) or classes (for quantitative data)

geographic coordinate system: a positional reference system that uses latitude and longitude to define the locations of points on the surface of a sphere or ellipsoid

grid coordinate system: a coordinate system mathematically placed on a flat map projection surface

image map: a map made by superimposing traditional map symbols on an image base

land partitioning: the division of property into parcels

map marginalia: additional information displayed within the mapped area or outside the main map area that helps explain or support the map

map legend: the key to understanding the mapped features

map projection: a geometric transformation of the Earth’s spherical or ellipsoidal surface onto a flat map surface

map reading: the process of looking at the map to determine what is depicted and how the cartographer depicted it

map scale: the relationship between distances on the map and their corresponding ground distances; also called cartographic scale

mental map: a map of the environment that people hold in their minds; also called a cognitive map

parcel: an area of land that has some implication for landownership or land use

plat: a map drawn to scale to show the parcels within a legal subdivision

qualitative information: information that varies in type but not quantity

quantitative information: numerical data that represent an amount, magnitude, or intensity

positional reference system: a system used to pinpoint the coordinates of features in geographic space

relief: the three-dimensional nature of the terrain surface

remote sensing: the process of collecting images of the Earth and other planetary bodies from a distance

selection: the process of deciding what type of and how much information to portray on a map

symbolization: the process by which features and their attributes are represented by graphically stylized marks or signs, called symbols, and sometimes by labels

terrain surface: a three-dimensional portrayal of data about the elevations of the physical environment

Map reading is the process of looking at the map to determine what is depicted and how the cartographer depicted it (Kimerling, et al., 2016). This involves identifying the features or phenomena depicted, the symbols and labels used, and information about the map that may not be evident on the map. If the symbols on a map and how they came to be there cannot be understood, the features represented on the map cannot be translated into a mental image of the real environment. Therefore, map reading can be framed within a discussion of the tangible cartographic map, which is a graphic representation of the environment that can be experienced physically through sight, sound, or touch, and the mental or cognitive map of the environment that people hold in their minds. Ultimately, it is the map in their heads, not the map in their hands, that people use to make decisions. This encompassing view of maps allows the inclusion of a variety of map forms that are otherwise awkward to categorize, such as mental maps (see Participatory Cartography, forthcoming), web maps (which may exist ephemerally; see Web Mapping), and new cartographic forms developed in the future.

Cartographic maps are valuable aids to help readers gain better understanding of their environment, develop better mental maps, and ultimately make better decisions. The map allows the reader to view the environment as if it were less complicated. There are advantages to such a simplified picture, but there is also the danger of an unrealistic view. Through successful map reading, cartographic and mental maps merge to tune the reader’s spatial thinking to the reality of the environment.

Map reading is a creative and sometimes difficult task because much of what exists in the environment is not shown on the map (see Scale & Generalization), and features on the map may not occur in reality but are instead interpretations of environmental characteristics. Although the mapmaker tries to translate reality into the clearest possible picture of the environment on the map, it is up to the map reader to convert this picture back into a useful mental image of the environment. Accordingly, different users may derive different understandings from the same map (MacEachren, 1995).

To effectively read a map, it is useful to understand what is involved in compiling a map. First, the environment is deconstructed into a selection of constituent features or phenomena that are classified and characterized. Second, meaningful and accurate data are gathered about the features or phenomena and their attributes. Third, the data are processed and manipulated so that the results can be displayed graphically using map symbols in a way that reveals something interesting or useful about the mapped environment (Kimerling, et al., 2016). The resulting graphical display shows the location and characteristics of geographic features and the relationships among geographic features.

Map reading starts with identifying depicted features or phenomena through their map symbols and associated labels. This mental activity is sometimes intuitive, especially if the symbols are familiar (e.g., blue lines for rivers and green polygon fills for vegetated areas; see Design and Aesthetics), features are clearly labeled (e.g., the only green line on the map is clearly labeled “Pacific Rim Trail”), or symbols mimic the feature they portray (e.g., a tent symbol is used to designate a camping area) (MacEachren, 1994). When the symbols cannot be interpreted intuitively, the map legend provides the key to understanding the mapped features (e.g., the topographic map and legend in Figure 1).

Figure 1. The symbols for the features shown on the Crater Lake topographic map by the U.S. Geological Survey (USGS) are identified in the legend. Source: USGS.

The first reading of the map should reveal the geographic area, subject, and form of representation of the features or phenomena shown. Map marginalia are additional graphics and text displayed within the mapped area or outside the main map area that help explain or support the map (Figure 2). The map legend is used to confirm the meaning of familiar symbols and provide the logic that underlies unfamiliar symbols. Alternatively, this information is sometimes in explanatory labels on the map itself or in text blocks (Brewer, 2015).

Figure 2. The map marginalia for the 1:24,000-scale and 1:250,000-scale maps of Crater Lake are primarily at the bottom of the page; for the 1:100,000-scale map, they are mostly on the right side of the page. Source: authors.

The following sections present important concepts that impact map reading. Because one of the most fundamental uses of maps is to find the locations of features, map readers must understand how locations on the Earth are transformed to locations on the map. These locations are represented by geographic or grid coordinates, or by using land partitioning systems. Understanding the spatial relationships among features is aided by knowledge of the basics of map projections (see Map Projections) and map scale. To appreciate which features are included and how they are represented on maps, readers benefit from an understanding of how cartographers select, classify, and generalize mapped features (see Scale & Generalization and Statistical Mapping). Knowing about symbolization helps readers understand properties or characteristics of the mapped features (see Symbolization & the Visual Variables). Finally, for some maps, reading is aided by knowledge of how the terrain (see Terrain Representation) and remote sensor images (see Remote Sensing Platforms) are used as a base for maps.

3.1 Geographic Coordinates

Maps show where things are located. Maps that allow for precise determination of the locations of features include a positional reference system. Such a system is based on a geometric model—either a sphere or an oblate ellipsoid—that approximates the true shape and size of the Earth (see Map Projections). Once the dimensions of the sphere or ellipsoid are defined, a graticule of parallels and meridians gives the latitude and longitude coordinates of a feature. The result is a geographic coordinate system—a positional reference system that uses latitude and longitude to define the locations of points on the surface of a sphere or oblate ellipsoid. For example, geographic coordinates in degrees of latitude and longitude are shown at the corner of the Crater Lake map in Figure 3. The locations of elevations measured relative to an average gravity or sea level surface called the geoid are defined by three-dimensional (latitude, longitude, elevation) coordinates.
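Geographic coordinates are often printed in degrees, minutes, and seconds, as at the corners of the Crater Lake map, while digital tools generally expect signed decimal degrees. A minimal Python sketch of that conversion (the function name and sample coordinates are illustrative, not taken from the map):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds plus a hemisphere letter (N/S/E/W)
    to signed decimal degrees (negative for south and west)."""
    dd = degrees + minutes / 60 + seconds / 3600
    return -dd if hemisphere in ("S", "W") else dd

lat = dms_to_decimal(45, 30, 0, "N")    # 45.5
lon = dms_to_decimal(122, 15, 0, "W")   # -122.25
```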

Figure 3. Latitude and longitude coordinates are shown at the corner of the Crater Lake topographic map. Elevations are defined relative to an average sea level surface called the geoid. Source: USGS.

3.2 Grid Coordinate Systems

The latitude-longitude geographic coordinate system has been used for over 2,000 years as the primary worldwide geographic coordinate system (Slocum et al., 2009). However, geocentric latitude and longitude coordinates on the sphere or geodetic latitudes and longitudes on the oblate ellipsoid, still key to modern position finding, are not as well-suited for making measurements of length, direction, and area on the map. Thus, grid coordinate systems often are used for measurement instead of geographic coordinates.

A grid coordinate system is a Cartesian (x,y) coordinate system placed on a flat map projection surface. This positional reference system designates locations on a map using horizontal and vertical lines spaced at regular intervals so that coordinates can be read from the square grid of intersecting straight lines (Kimerling, et al., 2016). A commonly used grid coordinate system for the world is the Universal Transverse Mercator (UTM) system. UTM coordinates in meters are indicated in the margins of the Crater Lake map in Figure 3, and State Plane coordinates (SPC), also commonly used in the United States, are shown in feet. Map readers should therefore become familiar with the appearance and properties of these and other grid coordinate systems placed on maps to support positioning and measuring features on maps (see Plane Coordinate Systems, forthcoming).
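Real UTM conversions use ellipsoidal formulas (or a library such as pyproj), but the structure of the transformation can be sketched with the simpler spherical transverse Mercator equations. The sketch below is an approximation only; its northings differ from true UTM values by up to tens of kilometres:

```python
import math

def utm_spherical(lat_deg, lon_deg, zone, k0=0.9996, R=6371000.0):
    """Spherical transverse Mercator approximation of UTM easting/northing.
    Real UTM uses ellipsoidal formulas; this shows the projection's structure,
    including the central-meridian scale factor k0 and the 500 km false easting."""
    lon0 = math.radians(-183 + 6 * zone)          # central meridian of the zone
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    B = math.cos(phi) * math.sin(lam - lon0)
    x = 0.5 * k0 * R * math.log((1 + B) / (1 - B))
    y = k0 * R * math.atan2(math.tan(phi), math.cos(lam - lon0))
    return 500000.0 + x, y                        # easting, northing in meters

# Crater Lake (~42.94N, 122.11W) falls in UTM zone 10 (central meridian 123W)
e, n = utm_spherical(42.94, -122.11, zone=10)
```

Points on the central meridian map to an easting of exactly 500,000 m; points east of it to larger eastings.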

3.3 Land Partitioning Systems

Land partitioning is the division of property into parcels, which are areas of land that have some implication for landownership or land use (Kimerling, et al., 2016). One of the first steps in the management of an area of land is to divide it into parcels that are then recorded on plats—maps drawn to scale to show the parcels within a legal subdivision. People interested in understanding details about landownership, zoning, taxation, and resource management often encounter plat maps and thus need to read them properly to understand their measurements and descriptions.

Land partitioning systems include both irregular (unsystematic) and regular (systematic) systems (Dent et al., 2008). Geometrically irregular schemes used in the United States include the metes-and-bounds system, French long lots, Spanish and Mexican land grants, and donation land claims. Regular systems, common in many parts of the world, include the U.S. Public Land Survey System (PLSS) and Canada's Dominion Land Survey, both of which are based on an array of townships and ranges. The PLSS is portrayed on the Crater Lake map in Figure 3 with red section and township lines and red township and range labels along the margins.

3.4 Map Scale

Maps always are smaller in size than the environment they represent. The amount of size reduction is known as the map or cartographic scale, which is the relationship between distances on the map and their corresponding ground distances (see Scale & Generalization). To use maps effectively, an understanding of important concepts relating to map scale is required, including how map scale is indicated on maps (verbal statements, representative fractions, and scale bars, as shown in Figure 4), how to convert between these indicators, and how to determine the scale of a map when no scale indicator is shown on the map (Kimerling, et al., 2016). Knowing the map scale is needed for correct map reading and use, especially when making measurements (Tyner, 2010).
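Converting between scale indicators is simple arithmetic on the denominator of the representative fraction; a small sketch (function names are illustrative):

```python
def ground_distance_km(map_distance_cm, scale_denominator):
    """Ground distance for a measured map distance under an RF of 1:scale_denominator."""
    return map_distance_cm * scale_denominator / 100000.0   # cm on ground -> km

def verbal_scale(scale_denominator):
    """Verbal statement equivalent to the representative fraction (metric units)."""
    return f"1 cm represents {scale_denominator / 100000.0:g} km"

# On a 1:100,000 map, 4.2 cm of map distance corresponds to 4.2 km on the ground
d = ground_distance_km(4.2, 100000)   # 4.2
```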

Figure 4. The map scale for the Crater Lake maps is expressed as a representative fraction, a verbal scale, and four scale bars. Source: authors.

The features of interest must be displayed at the correct scale for many map use purposes (Robinson et al., 1995). Large-scale maps are used when a small ground area is mapped in detail with little generalization of features (see Scale & Generalization). When accurate distance, direction, and area measurements are required, only large-scale maps suffice. The distortion on a map at a scale of 1:250,000 or larger is relatively negligible, so these large-scale maps can be considered geometrically exact representations of the small section of Earth they cover. The Crater Lake map in Figure 3 has a scale of 1:100,000, supporting reliable reading for purposes such as navigation and wayfinding, geocaching, orienteering, and other activities that require accurate position, distance, and direction finding.

Small-scale maps provide a more generalized presentation of a larger area, such as a state, country, continent, or the entire globe. The scale changes continuously across small-scale maps, so scale indicators on these maps give the scale at a particular point or along a given line or lines but are not accurate for the entire map.

3.5 Map Projections

A map projection is a geometric transformation of the Earth’s spherical or ellipsoidal surface onto a flat map surface (see Map Projections). Marginalia for the Crater Lake map (Figure 3) indicate that the map projection is the one that is used for UTM zone 10 north, which is a transverse Mercator projection with specific parameters to reduce distortion within the zone (between 126°W and 120°W and between the equator and 84°N). Knowing about commonly used map projections allows map readers to infer information about distance and area distortion even if it is not the projection explicitly stated on the map.

Map projections are considered one of the most bewildering aspects of map reading specifically and cartographic design generally (Kessler & Battersby, 2019). Failure to understand the impact of the projection on the resulting map has unfortunate consequences, as it hinders readers’ ability to understand how geographic features are distributed across the Earth. It also allows cartographers—through lack of understanding or by design—to use map projections in potentially deceptive ways.

All map projections onto a two-dimensional plane distort the three-dimensional Earth in some way. The surfaces onto which maps are projected include cones, cylinders, and planes; these are called the developable surfaces (see Map Projections). Distortion in map projections is not only related to these surfaces, it is also related to the case (either tangent or secant) and aspect (equatorial, polar, transverse, or oblique) of the projection and the location of the standard point or line(s) of tangency on the developable surface (Figure 5). Combinations of these projection properties result in recognizable patterns of the graticule in map projections. Being able to identify projection properties through the appearance of the graticule helps map readers better assess the geometric distortion on a map.

Figure 5. Properties of map projections, such as the developable surface, case, and aspect, influence the map projection’s geometric distortion. Only at the point and line or lines of tangency is scale true (scale factor or SF = 1). At all other locations, the SF is either smaller or larger than 1. Source: authors.
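As a concrete example of the scale factor (SF) described in Figure 5: on a tangent spherical Mercator projection, the SF along a parallel is 1/cos(latitude), which equals 1 exactly on the line of tangency (the equator) and grows toward the poles:

```python
import math

def mercator_parallel_sf(lat_deg):
    """Scale factor along a parallel on a tangent spherical Mercator projection."""
    return 1.0 / math.cos(math.radians(lat_deg))

sf_equator = mercator_parallel_sf(0)   # 1.0 at the line of tangency
sf_60n = mercator_parallel_sf(60)      # ~2.0: map lengths along 60N are doubled
```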

Map projections often are organized by the geometric properties that they preserve, such as areas or shapes of geographic regions as well as distances or directions from one point or between a pair of points (Campbell, 2001). The ellipses of Tissot's Indicatrix can be used to visualize the spatial change in distortion across the map. Figure 6 illustrates, via these ellipses, the geometric properties that are distorted or preserved in a selection of common map projections. For a more complete guide to map projections and their properties, see the USGS Map Projections poster.

Figure 6. These map projections are commonly used because of the geometric properties they preserve. Distortion in the principal directions of a Tissot's Indicatrix (shown with an orange ellipse) helps visualize the geometric distortion at locations across the maps. Source: authors.

4.1 Cartographic Compilation

Maps are abstract representations of the geographic environment and not reality itself. Because maps are scaled-down representations of the earth (see Scale & Generalization) that are projected onto a (most often flat) surface (see Map Projections), cartographers reduce complexity and increase clarity during the cartographic compilation process through selection, generalization, classification, and symbolization of the features on the map (Robinson et al., 1995).

For a cartographer, the first step in compiling information about the world into something that can be represented on a map is called selection—the process of deciding what type of and how much information to portray on a map. The cartographer’s selection of features is driven by the map’s subject and purpose. Once selected, features then are generalized into a simplified form appropriate for the map scale (see Scale & Generalization for a range of generalization operators). The selected and generalized data can be further manipulated through classification, the process of grouping or ordering features into categories (for qualitative data) or classes (for quantitative data) (see Statistical Mapping).

The final step in the cartographic compilation process is symbolization, a process by which features and their attributes are represented by graphically stylized marks or signs, called symbols (see Symbolization & the Visual Variables), and sometimes by labels (see Typography). Symbols do not always take on the appearance of the geographic features they represent (see Map Icon Design), potentially requiring a legend, as discussed above.

Proficient map readers understand the consequences of decisions made throughout the cartographic compilation process. Skilled map readers also appreciate artifacts of the compilation process that improve readability but may also affect map accuracy and uncertainty (Tyner, 2015; see Representing Uncertainty).

4.2 Mapping Qualitative versus Quantitative Information

Many maps portray qualitative information—information that varies in type but not quantity (Robinson et al., 1995; Figure 7). Learning the basic principles of qualitative data symbolization helps map readers understand how different types of point, line, and area features are symbolized on maps. Map readers should be able to understand the ways that cartographers depict a single feature using point, line, or area symbols, or combine multiple features in more complex reference or thematic maps and charts.

Figure 7. Qualitative visual variables for features represented by point, line, and area symbols on maps. Source: authors.

Similarly, map readers should understand the methods cartographers use to portray quantitative information—numerical data that represent an amount, magnitude, or intensity (see Common Thematic Maps, forthcoming; Robinson et al., 1995; Figure 8). Quantitative information often is shown using classes to simplify the map and associated legend, but at the cost of potentially masking important variations in the data distribution (see Statistical Mapping). Accordingly, the variety of classification methods and their differences must be understood to properly read quantitative information on maps. For more information on mapping qualitative and quantitative information, see Symbolization & the Visual Variables and Map Icon Design.
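Different break rules group the same data differently, so the same attribute can produce quite different-looking maps; a minimal sketch of two common classification methods (function names are illustrative):

```python
def equal_interval_breaks(values, n_classes):
    """Upper class limits for an equal-interval classification:
    the data range is divided into n_classes equal-width classes."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / n_classes
    return [lo + step * i for i in range(1, n_classes + 1)]

def quantile_breaks(values, n_classes):
    """Upper class limits for a quantile classification:
    roughly equal counts of observations fall in each class."""
    s = sorted(values)
    return [s[int(len(s) * i / n_classes) - 1] for i in range(1, n_classes + 1)]

# A skewed distribution is split very differently by the two methods
equal_interval_breaks([0, 10, 20, 100], 4)   # [25.0, 50.0, 75.0, 100.0]
quantile_breaks([1, 2, 3, 4, 5, 6, 7, 8], 4) # [2, 4, 6, 8]
```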

Figure 8. Quantitative visual variables for features represented by point, line, and area symbols on maps. Source: authors.

5.1 Example 1: Reading Terrain Maps

In mapping, a terrain surface is a three-dimensional portrayal of elevation data describing the physical environment (see Terrain Representation). For topographic maps, understanding relief—the three-dimensional nature of the terrain surface—is crucial to establishing position and studying spatial associations of the terrain with other geographic phenomena, such as vegetation and rainfall. Knowing the many ways that relief is portrayed cartographically—such as contours, relief shading, hypsometric tinting, and oblique views, to name only a few (Figure 9)—allows map readers to determine absolute or relative elevation on maps and to identify different terrain features (Slocum, et al., 2009).

Figure 9. The terrain around Crater Lake can be represented by contours, relief shading, hypsometric tinting, and combinations of these. An oblique projection provides a three-dimensional perspective of terrain compared to planimetric maps that portray the landscape from a vantage point directly above the mapped area. Source: authors.

5.2 Example 2: Reading Image Maps

The use of remote sensing in support of cartography has grown enormously over the last century (see Remote Sensing Platforms). Remote sensing is the process of collecting images of the Earth and other planetary bodies from a distance. These remotely sensed images capture features in the environment using cameras or other electronic imaging instruments (sensors) that are sensitive to the energy emitted or reflected from objects (Robinson et al., 1995; Campbell, 2001).

Although remotely sensed images are excellent for showing many features in the environment, they may fail to depict others, for example, political boundaries. Many useful map elements, such as symbolized features, labels, and reference grids, are absent on images. Features on images typically are not classified and identified in a legend. For these reasons, remotely sensed images often are made more interpretable and useful by cartographic enhancement, with overlaid symbols for point, line, and area features, as well as text for labels. A map made by superimposing traditional map symbols on an image base is called an image map (Kimerling, et al., 2016), which is a common option for web maps today (Figure 10).

Figure 10. This online image map of Crater Lake is annotated to show the roads and creeks near the southwestern rim of the crater. Note the distortion in the appearance of the trees near the lake, which appear to be leaning toward the water. Source: authors.

Properly reading image maps is aided by an understanding of the many factors that can influence the appearance of the remotely sensed images, including the sensor’s vantage point, spectral sensitivity, technical quality, spatial resolution, and atmospheric conditions. Additionally, map readers may need to understand how to interpret black-and-white, true-color, and color-infrared imagery or how to identify variations in the appearance of features and patterns in images taken in the visible, near-infrared, thermal-infrared, and microwave (radar) portions of the electromagnetic spectrum.

Familiarity with the cartographic concepts and mapping methods outlined above gives map readers an appreciation of the important decisions that are made about what to map and the methods used to show different aspects of the environment on maps. Understanding key concepts related to geographic locations, cartographic compilation, and unique map types helps map readers to better understand the large and varied amount of geographic information that can be gathered from reading a map as well as the map’s accuracy or uncertainty. If map readers can merge their mental maps with their reading of cartographic maps, they will be better able to tune their spatial thinking to the reality of the environment. This is the ultimate goal of map reading, because it is the map in their heads, not the map in their hands, that people use to make decisions.

Brewer, C. A. (2015). Designing better maps: A guide for GIS users (2nd ed.). California, USA: Esri Press.

Campbell, J. (2001). Map use & analysis (4th ed.). New York: McGraw-Hill.

Dent, B., Torgusen, J., & Hodler, J. (2008). Thematic map design (6th ed.). New York: WCB/McGraw-Hill.

Kessler, F. C., & Battersby, S. E. (2019). Working with map projections: A guide to their selection. Boca Raton, FL: CRC Press.

Kimerling, A. J., Buckley, A. R., Muehrcke, P. C., & Muehrcke, J. O. (2016). Map use: Reading, analysis, interpretation (8th ed.). Redlands, CA: Esri Press.

MacEachren, A. M. (1994). Some truth with maps: A primer on symbolization and design. Washington: Association of American Geographers.

MacEachren, A. M. (1995). How maps work: Representation, visualization, and design. New York: Guilford Press.

Robinson, A. H., Morrison, J. L., Muehrcke, P. C., Kimerling, A. J., & Guptill, S. C. (1995). Elements of cartography (6th ed.). New York: Wiley.

Slocum, T. A., McMaster, R. B., Kessler, F. C., & Howard, H. H. (2009). Thematic cartography and geographic visualization (3rd ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.

Tyner, J. A. (2010). Principles of map design. New York: McGraw-Hill.

Tyner, J. A. (2015). The world of maps: Map reading and interpretation for the 21st century. New York: Guilford Press.

  • Explain the relationship between a cartographic map and a mental or cognitive map.
  • Match the symbols on a map to their corresponding explanations in the legend.
  • Identify the location of the same feature on different maps that use the geographic coordinate system versus a grid coordinate system.
  • Illustrate the ways that scale can be indicated on a map and convert from one scale indicator to another.
  • Compare and contrast the distortions caused by map projections (for example, area, shape, length, and direction).
  • Identify through appearance of the graticule the distortion in a map projection.
  • Explain the steps in the cartographic compilation process and discuss their impact on map reading.
  • Discuss the primary differences between mapping quantitative and qualitative data.
  • Find specified features on a topographic map and determine the elevation of these features.
  • Compare and contrast issues with reading topographic maps, terrain maps, and image maps.
  1. Give the latitude and longitude of the following places (express each in N–S, E–W notation and in ±90°, ±180° degree notation):
  • A quarter of the way around the Earth going west from the prime meridian, halfway from the equator to the north pole (45°N, 90°W; 45°, –90°)
  • Intersection of the equator and prime meridian
  • A point one-third of the way from the equator to the north pole and one quarter of the way around the Earth, eastward from the prime meridian
  • A point halfway from the equator to the south pole and one-quarter of the way around the Earth, eastward from the prime meridian
  1. On a map of 1:40,000 scale, the distance measured between points A and B is 7.3 inches. What ground distance in miles and kilometers does this represent?
  2. Determine the representative fraction of a map at 0°, 30°, and 60° latitude, then create a scale bar for the map at each of these three latitudes.
  3. Using a topographic map segment, determine the UTM and SPC coordinates for a designated feature on the map.
  4. Describe the land partitioning systems responsible for the landscapes seen in images of three different areas of the country or world (for example, French long lots, metes and bounds, and PLSS).
  5. Take a 1:24,000 topographic quadrangle of your area into the field and compare the contours you see on the map with terrain features on the ground. Get a feel for the relationship between contours and terrain features. Repeat the process with a 1:100,000-scale topographic map of the same area. Is it easier or harder to relate contours to terrain features on this smaller-scale map?
  6. Using maps from the printed media (newspapers, news magazines), find examples of the different classes of qualitative symbols. Separate your examples into single-theme and multi-theme maps and by how the point, line, or area information is being displayed.
  7. Study quantitative thematic maps in books, atlases, newspapers, weekly news magazines, or from the Internet. For each map, determine what graphic elements were employed to create a magnitude message at the ordinal, interval, or ratio level of measurement. Search until you have found several examples of the techniques mapmakers use to portray data at these three levels of measurement. Sort your maps from the previous exercise into good and poor examples of cartographic design.
  8. Look for different types of remotely sensed images for a region of interest. Find examples of black-and-white, true-color, and color-infrared aerial photos. Also obtain scanner images taken in the visible light, near-infrared, thermal-infrared, and microwave (radar) portions of the spectrum. Select features on one image, and study how their appearance changes in the other types of imagery. Get a feel for the strengths and weaknesses of each type of imagery. What purpose does each serve, and how can the different images be used in combination to learn more than can be determined from any one image?
  9. Look at large-scale topographic maps at different scales and determine how the accuracy of each map is conveyed. Carefully study each map and its legend. Then, for each map, write down what you would have liked to have been told about its accuracy.
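As a worked check of the 1:40,000 scale-conversion exercise above (pure unit arithmetic; 63,360 inches per mile and 0.0254 m per inch):

```python
MAP_INCHES = 7.3
SCALE = 40000                         # RF denominator for 1:40,000

ground_inches = MAP_INCHES * SCALE    # 292,000 inches on the ground
miles = ground_inches / 63360         # ~4.61 miles
km = ground_inches * 0.0254 / 1000    # ~7.42 kilometers
```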

Bertin, J. (2010). Semiology of graphics: Diagrams, networks, maps. California: Esri Press.

Harley, J. B. (1988). Maps, knowledge, and power. In D. Cosgrove, & S. Daniels, (Eds.), Iconography of landscape: Essays on the symbolic representation, design and use of past environments, (277-311). Massachusetts: Cambridge University Press.

Kraak, M-J. (2014). Mapping time: Illustrated by Minard’s map of Napoleon’s Russian campaign of 1812. Redlands, CA: Esri Press.

Krygier, J., & Wood, D. (2016). Making maps: A visual guide to map design for GIS (2nd ed.). New York: Guilford Press.

Monmonier, M. S. (1996). How to lie with maps (2nd ed.). Illinois: University of Chicago Press.

Conic Map Projections

Conic map projections include the equidistant conic, the Lambert conformal conic, and the Albers equal-area conic projections.

These maps are defined by the cone constant, which dictates the angular distance between meridians.
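For the Albers equal-area conic, for example, the cone constant is commonly defined as the mean of the sines of the two standard parallels; a minimal sketch (the function name is illustrative, and the sample parallels are values often used for conterminous U.S. maps):

```python
import math

def albers_cone_constant(std_lat1_deg, std_lat2_deg):
    """Cone constant n for an Albers equal-area conic with two standard parallels.
    On the flattened cone, meridians that are dlon apart on the globe are drawn
    n * dlon apart on the map (n = 1 would be a full circle, as in a polar azimuthal)."""
    return (math.sin(math.radians(std_lat1_deg)) +
            math.sin(math.radians(std_lat2_deg))) / 2.0

n = albers_cone_constant(29.5, 45.5)   # ~0.60
```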

These meridians are equidistant, straight lines that converge at a common point on the projection, whether or not that point represents a pole.

Like cylindrical projections, conic map projections have parallels that cross the meridians at right angles, with a constant amount of distortion along each parallel. Conic map projections are conceived as if a cone were wrapped around a globe, but they are not constructed geometrically in this way.

Conic map projections are best suited for use as regional or hemispheric maps, but rarely for a complete world map.

The distortion in a conic map makes it inappropriate for use as a visual of the entire Earth but does make it great for use visualizing temperate regions, weather maps, climate projections, and more.

4 Answers

Those look like latitude and longitude coordinates (e.g. 33.511615 degrees North latitude, 86.778809 West longitude for the first one) which place them in Alabama.

Are you sure those are some sort of map-specific coordinates and not Lat/Lon?

Edit: if they are Lat/Lon, check out this site which has a great-circle distance calculator and the formulae you'd need to do it yourself. This site allows decimal degrees rather than minutes/seconds so that may be more useful.

The distance between two points:

  • Point 1, with coordinates lat1 and long1
  • Point 2, with coordinates lat2 and long2

may be calculated as follows, using the haversine formula (code shown in Python):
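The Python code itself did not survive in this copy of the answer; a minimal haversine sketch, assuming a spherical Earth with a mean radius of 6,371 km, would be:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km between two points given in decimal degrees,
    using the haversine formula on a sphere of mean radius R."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

Because it treats the Earth as a sphere, this stays within roughly half a percent of ellipsoidal (WGS 84) results, which is usually fine for the kind of rough distance check asked about here.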

Of course it should be adjusted and suited to your needs. Hope it helps.

I'm pretty sure you would need more information to scale the distance correctly. You would really need to know the model scale.

This is sort of a complex problem which requires a bit more information and some consideration of what you are hoping to accomplish with the solution.

I am going to assume that the coordinates you gave above are from the WGS 84 datum, which is the most common method used by modern gps systems today.

However, it is very important that you also know which map projection these coordinates will be plotted on. Basically, since the earth is round (and actually not perfectly round; it is slightly ellipsoidal due to the rotation of the earth), when we project this onto a flat surface, it can be done in many ways. The method used to 'flatten' the Earth and also the scale of the map (how 'zoomed in' the map is) can have a strong impact on how useful an x/y scale can be for measurements on the map image.

What maps are you projecting onto? If I knew more, I think I could be of better assistance.
