
What is the difference between accuracy and precision?

Ans. Accuracy and precision are two important concepts in GIS. Their meanings, and the difference between them, are explained below: 1. Accuracy: The statistical meaning of accuracy is the degree to which an estimated mean differs from the true mean. In the GIS context, the term accuracy refers to the degree to which the information presented on a map or in a digital database matches true or accepted values. Accuracy is directly related to the quality of the data and the number of errors that exist in a dataset or map.

In discussing a GIS database, accuracy can be of various types, such as horizontal and vertical accuracy with respect to geographic position, as well as attribute, conceptual, and logical accuracy. The level of accuracy required for specific applications varies greatly, and highly accurate data can be very difficult and costly to produce and compile. Sometimes the true value cannot be determined, so an accepted value is used as the reference. For example, if the elevation of a point on a topographic map is determined to be 100 meters, its accuracy can be assessed by determining how close that figure is to the real value.

In some situations, it is not possible to determine the true value. In such cases, an accepted value is used as the reference. Suppose the coordinates of a location read from a map are (10, 20), but those calculated from accurate GPS data are found to be (10, 15). Generally, the values calculated through GPS are considered more accurate. 2. Precision: The term precision is used in different contexts. With reference to GIS, precision is the level of measurement and exactness of description in a GIS database. Precise locational data may record the position of an object to a fraction of a unit.

Precise information about attributes specifies the characteristics of a particular feature in a detailed way. Precision is the recorded level of detail of your data. Computers store data with a high level of precision; however, a high level of precision does not imply a high level of accuracy, so it is important to realize the difference between the two. Precise data may still be inaccurate, no matter how carefully it has been measured. This could be due to various reasons: for example, surveyors may make mistakes, or data may be entered into the database incorrectly.
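The distinction can be made concrete with a short sketch, here in Python with hypothetical measurement values (not taken from the text): repeated readings of a benchmark whose true elevation is 100 meters are tightly clustered (precise) yet systematically offset from the true value (inaccurate).

```python
# Illustrative sketch: precise but inaccurate measurements.
from statistics import mean, stdev

true_value = 100.0  # true elevation of the benchmark, metres
readings = [102.113, 102.118, 102.115, 102.117, 102.114]  # hypothetical data

precision = stdev(readings)                   # spread of repeated measurements
accuracy_error = mean(readings) - true_value  # offset of the mean from truth

print(f"precision (spread): {precision:.4f} m")   # tiny: very precise
print(f"accuracy error (bias): {accuracy_error:.4f} m")  # large: inaccurate
```

The spread of the readings measures precision, while the offset of their mean from the true value measures accuracy; the example shows that the two are independent.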

Precision can be classified into two categories: single precision and double precision. The term single precision denotes a lower level of coordinate accuracy, while double precision refers to a comparatively higher level, based on the number of significant digits that can be stored for each coordinate. Single precision numbers can store up to 7 significant digits for each coordinate and are therefore able to retain a precision of +/- 5 meters over an extent of 1,000,000 meters.
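As a rough illustration, assuming a hypothetical coordinate value, Python's standard struct module can round-trip a number through 4-byte single-precision storage to show how sub-metre detail is lost at large magnitudes, while the 8-byte double that Python uses natively retains it:

```python
# Sketch: single- vs double-precision storage of a large coordinate.
import struct

def round_trip_single(x: float) -> float:
    """Store x as a 4-byte IEEE 754 single-precision float and read it back."""
    return struct.unpack('f', struct.pack('f', x))[0]

coord = 12_345_678.9  # hypothetical easting in metres, needing sub-metre detail

as_single = round_trip_single(coord)
print(abs(as_single - coord))  # sub-metre detail lost in single precision
print(coord)                   # Python floats are doubles: detail retained
```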

On the other hand, double precision can store up to 15 significant digits for each coordinate. Hence, it retains an accuracy of much less than 1 meter even at a global extent.

Q. 2- Write a brief explanation of the following terms and give examples: applicability, bias, compatibility, completeness, consistency. Ans. (i) Applicability: The term applicability describes the relevance of the data to the task at hand; in other words, the suitability or appropriateness of a data set. Applicability is viewed in terms of a set of commands or operations.

For example, if continuously varying data are interpolated using the Thiessen polygon method, there is an unsuitable match between the data and the technique, because the data vary continuously while the Thiessen polygon method assumes abrupt variations. Alternatively, applicability can also describe the suitability of data for solving a particular problem. (ii) Bias: With reference to GIS, the term bias represents the systematic variation of the calculated data from the actual data. It is a constant error that exists throughout a data set.
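One common systematic error of this kind is a program truncating decimal values, which shifts every stored value in the same direction. A minimal sketch with hypothetical elevation values:

```python
# Sketch of a systematic (biased) error: truncation always rounds toward zero,
# so every error has the same sign, unlike random measurement noise.
import math

elevations = [100.7, 253.4, 87.9, 412.6]  # hypothetical field values, metres

truncated = [math.trunc(v) for v in elevations]
errors = [t - v for t, v in zip(truncated, elevations)]

print(truncated)  # [100, 253, 87, 412]
print(errors)     # every error is negative: a constant bias, not noise
```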

An example is the consistent truncation of decimal points from data values by a software program. This example has a technical source, but human sources of bias also exist: an aerial photograph interpreter, for instance, may have the habit of ignoring all objects smaller than a particular size. Such consistent errors can be corrected easily, but they are usually very difficult to locate. Bias is one of the problems that affect the quality of individual data sets. (iii) Compatibility: A GIS data set should be spatially and temporally complete.

It should also include information about the attributes. Different data sets in a GIS database should be compatible, so that a sensible result can be produced. For example, with the help of GIS it is possible to overlay two maps even if they were originally mapped at different scales; however, the result will be useless because of the incompatibility between their scales. It is also very difficult to combine maps that contain data measured on different scales of measurement. To ensure compatibility, the ideal approach is to develop the data sets using identical methods of data capture, storage, manipulation and editing.

(iv) Completeness: A complete data set will entirely cover the study area and the time period of interest. The data should be spatially and temporally complete and should also have a complete set of attribute information.

Completeness of polygon data is relatively easy to determine: errors in attribute data may be revealed if polygons lack attribute information, and errors in spatial data may be present if a polygon has two sets of attribute data. Incompleteness of line and point data is less obvious, as missing features simply do not appear in the database; the only way to check completeness is to compare the data with another source. For time-series data, completeness is very difficult to define. Almost all time-series data in GIS are referenced to discrete moments in time.
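The polygon checks described above can be sketched as a comparison between polygon IDs and an attribute table, with hypothetical IDs standing in for real data:

```python
# Sketch of a polygon completeness check: flag polygons with no attribute
# record (attribute error) and IDs with duplicate records (spatial/attribute
# error). IDs and tables are hypothetical.
from collections import Counter

polygon_ids = ["P1", "P2", "P3", "P4"]     # IDs present in the spatial data
attribute_rows = ["P1", "P2", "P2", "P4"]  # IDs present in the attribute table

counts = Counter(attribute_rows)
missing = [p for p in polygon_ids if counts[p] == 0]   # no attribute record
duplicated = [p for p, n in counts.items() if n > 1]   # two sets of attributes

print(missing)     # ['P3']
print(duplicated)  # ['P2']
```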

However, as time is continuous, a set of maps relating to a series of discrete times is incomplete, and broad assumptions must be made about change in the intervening periods. (v) Consistency: Consistency applies between individual data sets as well as within them. Inconsistency can exist within data sets where some sections have a different source or have been digitized by different people; this causes spatial variation in the error characteristics of the final data layer. Problems of inconsistency may also arise from the manner of data collection.

For example, meteorological stations maintain records of snowfall. The equipment used to measure snowfall may be of different ages and designs; hence, the accuracy of the measurements may vary from one station to another, resulting in inconsistency.

Q. 3- Describe how spatial process models can be used to forecast the behavior of physical systems. Ans. The term spatial data refers to information about the location and shape of geographic features and the relationships among them. It includes remotely sensed data as well as map data.

In other words, spatial data refers to geographic areas or features, which occupy a location. Non-spatial data, on the other hand, has no specific location in space, although it can have a geographic component and can be linked to a geographic location. Tabular and attribute data are also non-spatial data that can be linked to a particular location. For example, on a map of a city, a location such as a park is a spatial feature, while the associated information about the park, such as its name or area, consists of non-spatial attributes linked to the park by its location.

A process model combines existing knowledge about the environmental processes of the real world into a set of relationships and equations for quantifying those processes. In a spatial context, forecasting what may happen in the future under a given set of conditions is probably one of the most challenging tasks for physical and environmental modelers. There are several approaches to process modeling. Forecasting models tend to be dynamic, with one or all of the input variables changing over time.
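As a deliberately simplified sketch (not a model from the text), a dynamic forecasting model steps a state variable through discrete time intervals using a rate equation; a real spatial process model would apply such an equation to every cell or feature in the study area. The function name and growth figures here are hypothetical:

```python
# Minimal dynamic process model: project a state variable forward in time
# with the rate equation x[t+1] = x[t] * (1 + growth_rate).
def forecast(initial: float, growth_rate: float, steps: int) -> list[float]:
    """Return the state at t = 0..steps under compound growth."""
    values = [initial]
    for _ in range(steps):
        values.append(values[-1] * (1 + growth_rate))
    return values

# Hypothetical example: urban land cover (km^2) growing 5% per time step.
projection = forecast(100.0, 0.05, 3)
print(projection)  # four values, compounding 5% per step
```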

To decide which approach is suitable for a particular situation, it is necessary to understand the range of available models as well as their strengths and weaknesses. Process models are classified into two types: a priori and a posteriori. An a priori model is used to model processes for which a body of theory has not yet been established; in these situations, the model is used to help in the search for a theory. An a posteriori model, on the other hand, is designed to explore an established theory; such models are usually constructed in an attempt to apply the theory to a new area. A model can change from a posteriori to a priori and vice versa.

Beyond these two categories, developing a further classification of process models becomes very complex. One such classification distinguishes natural and scale analogue models, conceptual models, and mathematical models. In GIS, these three approaches may be used in isolation, substituted for each other in an iterative development process, or combined in a larger, more complex model. Different modeling techniques can thus be used together to build up complex models of spatial processes.

Q. 4- How will developments in related fields, such as hardware, communications and multimedia, influence the future of GIS?

Ans. GIS technology depends on and is affected by several different fields, such as hardware, communications and multimedia, so changes and improvements in these fields have an impact on the development of GIS. Early developments in computer technology enabled corresponding improvements in GIS, and the decreasing cost of computing power over the last few decades has been one stimulus for the wider use of GIS. Because of the capabilities it provides, GIS has continually proved itself to be one of the most useful technologies in the world.

As a result, the use of GIS in various fields is continuously increasing. An active GIS market has resulted in reduced costs and continual improvements in GIS hardware, software, and data. In the future, these developments will lead to wider application of the technology in sectors such as government, business, and industry. It is expected that GIS and related technologies will help in the analysis of large data sets, resulting in a better understanding of terrestrial processes and human activities and improvements in economic vitality and environmental quality.

Improvements in graphics technology, data access and storage methods, digitizing, programming and human-machine interfaces, together with developments in systems theory, will have an important impact on GIS. Hardware developments in screens, printers, plotters and input devices such as digitizers and scanners have also had a major impact on GIS development. The use of multimedia tools such as audio, imaging and video technology and virtual reality will further enhance the modeling and presentation of GIS applications.

Q. 5- Describe the MCE approach to modeling the decision-making process. Ans.

MCE is an acronym for 'multi-criteria evaluation'. In this approach, the first step is to define the problem. Four main types of problem arise in MCE: doubts about whether the problem has been adequately and correctly defined; the expected degree of error propagation in the data and uncertainty about their quality; the independence of the data; and the robustness and sensitivity of the method. Similarly, there are two major sources of uncertainty associated with defining the scope of a multi-criteria problem: failure to identify all relevant criteria, and uncertainty about whether all relevant alternatives have been considered.

MCE is a method for combining data according to their importance in decision-making. At a conceptual level, MCE methods involve qualitatively or quantitatively scoring or ranking criteria to reflect their importance to either a single objective or a multiple set of objectives. MCE techniques are numerical algorithms that define the suitability of a particular solution on the basis of the input criteria and weights, together with some mathematical or logical means of determining trade-offs when conflicts arise. In MCE, a range of criteria expected to influence the decision must be defined.

These criteria can be imagined as layers of data in a GIS. GIS is an ideal framework for using MCE to model spatial decision-making problems because it provides the data management and display facilities lacking in MCE software. In turn, MCE provides GIS with the means of evaluating complex multiple-criteria decision problems where conflicting criteria and objectives are present. Together, GIS- and MCE-based systems have the potential to provide the decision maker with a more rational, objective and unbiased approach to spatial decision-making and decision support.
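A common MCE technique of this kind is weighted linear combination. The sketch below, with hypothetical criteria, weights and scores, ranks two candidate sites by the weighted sum of their criterion scores:

```python
# Sketch of weighted linear combination, a simple MCE technique: each
# criterion is scored 0..1, weights reflect importance, and the weighted
# sum gives an overall suitability score for each candidate site.
def suitability(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of criterion scores; weights are normalised to sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] / total_weight for c in weights)

weights = {"slope": 0.5, "road_distance": 0.3, "land_cost": 0.2}
site_a = {"slope": 0.9, "road_distance": 0.4, "land_cost": 0.7}
site_b = {"slope": 0.6, "road_distance": 0.9, "land_cost": 0.8}

print(suitability(site_a, weights))  # 0.9*0.5 + 0.4*0.3 + 0.7*0.2 = 0.71
print(suitability(site_b, weights))  # 0.6*0.5 + 0.9*0.3 + 0.8*0.2 = 0.73
```

Each criterion dictionary corresponds to a GIS data layer; in a raster GIS the same weighted sum would be computed cell by cell across the study area.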

The major problems in MCE are the choice of MCE algorithm and the specification of weights. Different MCE algorithms may produce slightly different results when used with the same data and the same weights.

Q. 6- What are the main sources of error in GIS data input, database creation and data processing? Ans. Error is the occurrence of incorrect output as a result of incorrect information obtained from the source. The main sources of error in GIS are survey data and aerial or remotely sensed data; both of these sources of spatial and attribute data for GIS can include errors.

For example, there could be errors in survey data if the people who operate the equipment or record the observations make a mistake, or if there are technical problems with the equipment being used. Similarly, attribute data could contain errors if features were recorded incorrectly by the operator; in this case, attribute errors would occur if the characteristics of the respondents were wrongly allocated or incorrectly noted. Remotely sensed and aerial photography data, on the other hand, could contain spatial errors if mistakes were made when spatially referencing the images.

Mistakes in classification and interpretation would create attribute errors. The most frequently used sources of data for GIS are maps. Maps contain relatively straightforward spatial and attribute errors, which result from human or equipment failures, as well as more complex errors arising from the cartographic techniques used in the map-making process. A process known as data encoding is used to transfer data obtained from a non-GIS source, such as a paper map, satellite image or survey, into a GIS format.

Although hardware that can automatically convert paper maps into digital form is available, much of the digitizing of paper maps is still done using a manual digitizing table. Researchers believe that manual digitizing is one of the main sources of error in GIS. There are various sources of error within the digitizing process, which can be classified into two main types: source map error and operational error. Once the encoding of data is complete, the next step is cleaning and editing.

It is important to spot and remove all errors during the cleaning and editing phase, as this is the phase after which the data are used for analysis. Many problems can be eliminated by carefully examining the data. Cleaning and editing are not themselves potential sources of error; in fact, they are positive processes. After cleaning and editing, it may be necessary to convert the data from vector to raster format or vice versa. During this process, both the size of the raster cell and the method used for rasterization have important implications for positional error.
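The effect of cell size can be sketched by snapping a point to the centre of its raster cell: the positional error is bounded by half the cell diagonal, so it shrinks as the cell size shrinks. The coordinates here are hypothetical:

```python
# Sketch of vector-to-raster positional error: a point is represented by the
# centre of the cell it falls in, so coarser cells mean larger errors.
def rasterize_point(x: float, y: float, cell: float) -> tuple[float, float]:
    """Snap a point to the centre of its raster cell (grid origin at 0, 0)."""
    return ((x // cell) * cell + cell / 2, (y // cell) * cell + cell / 2)

point = (123.4, 567.8)  # hypothetical coordinates, metres

for cell_size in (100.0, 10.0, 1.0):
    cx, cy = rasterize_point(*point, cell_size)
    error = ((cx - point[0]) ** 2 + (cy - point[1]) ** 2) ** 0.5
    print(f"cell {cell_size:>5} m -> positional error {error:.2f} m")
```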

The smaller the cell size, the greater the precision of the resulting data. Errors may also be introduced during the manipulation and analysis phase of the GIS database. Checking points such as the suitability of the data for analysis, the suitability of the format, the compatibility of the data sets, the relevance of the data, and the appropriateness of the technique for the desired output can help a GIS user carry out a proper GIS analysis.