Business Metrics and Measures

Measures are raw data points that quantify business performance, such as revenue, units sold, or customer count. Metrics put those measures in context, for example by expressing a product line's revenue as a percentage of total sales. Together, metrics and measures help you set targets for your business and evaluate progress toward those goals.
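
As a minimal illustration with made-up numbers, the short Python sketch below turns raw measures (revenue per product line) into a metric (each line's share of total sales); the product names and figures are purely hypothetical.

    # Illustrative only: hypothetical revenue figures for a small product catalogue.
    sales = {"widgets": 48_000, "gadgets": 31_500, "services": 20_500}

    total_sales = sum(sales.values())

    # Express each measure (raw revenue) as a metric (share of total sales).
    for product, revenue in sales.items():
        share = 100 * revenue / total_sales
        print(f"{product}: {revenue} ({share:.1f}% of total sales)")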

Most measurements are based on some kind of standard, like the length of an inch or the weight of a pound. These standards come from historical agreements and laws.

Units

The units used to describe physical quantities are defined by the International System of Units (SI). The SI contains base and derived units for all physical quantities, such as length and mass. Unit conversions are much simpler within the metric system than between metric and non-metric systems, thanks to its coherence and its prefixes, which act as powers-of-10 multipliers.
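
Because the prefixes are simply powers of ten, converting between prefixed forms of the same unit reduces to multiplying or dividing by the right power of 10. A minimal Python sketch of that idea (the prefix table is deliberately limited to a few common prefixes):

    # A few common SI prefixes and the power-of-10 multiplier each represents.
    PREFIXES = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3, "u": 1e-6}

    def convert(value, from_prefix, to_prefix):
        """Convert a value between two prefixed forms of the same SI unit."""
        return value * PREFIXES[from_prefix] / PREFIXES[to_prefix]

    print(convert(2500, "m", ""))  # 2500 mm -> 2.5 m
    print(convert(2.5, "k", ""))   # 2.5 km  -> 2500.0 m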

Historically many of the standard units were based on aspects of human dimensions, such as the cubit, based on the length of the forearm; the pace, based on the distance of one stride; and the foot and hand. These are called anthropic units, and they can vary from person to person.

Ideally we would like a uniform system of measurement based on microscopic features of matter and fundamental physical constants, such as the speed of light. This is the goal of metrology, the scientific discipline whose aim is to establish internationally agreed standards for weights and measures.

Types

The measurement of a physical quantity requires a standardized unit. The chosen standard is called a fundamental or base unit; other, derived units are then obtained by combining base units. For example, the SI unit of length is the meter (symbol: m), which has a definite, internationally agreed definition.
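
To make the base/derived distinction concrete, the sketch below builds a derived quantity (force in newtons, where 1 N = 1 kg·m/s²) from quantities expressed in SI base units; the numbers are arbitrary.

    # Quantities expressed in SI base units.
    mass = 5.0           # kilograms (kg)
    acceleration = 9.81  # metres per second squared (m/s^2)

    # The newton is a derived unit: 1 N = 1 kg * m / s^2.
    force = mass * acceleration
    print(force, "N")  # about 49 N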

The choice of a unit is important and can have consequences. In particular, mistakes in converting from one system of measurement to another can have serious effects. The Air Canada Boeing 767 that ran out of fuel in mid-flight in 1983 was the result of a conversion mistake between imperial and metric units: the fuel load was calculated using a pounds-per-litre factor where kilograms per litre was needed, so the aircraft departed with far less fuel than the crew believed.
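
The kind of slip involved is easy to reproduce. The sketch below uses approximate fuel-density factors and a hypothetical tank volume (not the actual flight data) to show how pounds mistaken for kilograms overstate the believed fuel mass by a factor of about 2.2.

    # Approximate jet-fuel conversion factors (illustrative, not the flight's exact values).
    KG_PER_LITRE = 0.80  # metric factor that should have been used
    LB_PER_LITRE = 1.77  # imperial factor (pounds per litre)

    litres_on_board = 7_700  # hypothetical volume measured in the tanks

    actual_kg = litres_on_board * KG_PER_LITRE    # true mass of the fuel
    believed_kg = litres_on_board * LB_PER_LITRE  # pounds mistakenly treated as kilograms

    print(f"actual:   {actual_kg:.0f} kg")    # 6160 kg
    print(f"believed: {believed_kg:.0f} kg")  # 13629 'kg' (really pounds)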

A measurement has three kinds of information: the size or magnitude of the measurement (a number); a standardized unit to compare it with; and an indication of its uncertainty. Each of these components is discussed in more detail below. The uncertainty of a measurement is usually described in terms of its precision and accuracy.
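
One way to keep those three pieces of information together in code is a small record type. The sketch below is only an illustration of the idea, not a standard library type.

    from dataclasses import dataclass

    @dataclass
    class Measurement:
        """A measured value with its unit and its uncertainty (in the same unit)."""
        value: float
        unit: str
        uncertainty: float

        def __str__(self):
            return f"{self.value} ± {self.uncertainty} {self.unit}"

    # Example: a length of 1.25 m known to within 0.01 m.
    print(Measurement(1.25, "m", 0.01))  # 1.25 ± 0.01 m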

Magnitude

The magnitude of a scalar (non-vector) quantity provides a numerical value that indicates its size. It can be either an absolute or a relative measure, depending on the unit of measurement used to express it.

In astronomy, the magnitude of a star or other celestial object measures how bright it appears to be. The brightest stars have magnitudes near zero or even negative values, and dimmer objects have progressively larger positive magnitudes. The scale is arranged this way to make it easy to compare the brightness of different objects.

The ancient Greek astronomer Hipparchus developed the system of magnitudes for stars, grouping them into six categories based on their brightness as seen by the naked eye. This is an approximate scale, but it works reasonably well because the human eye responds roughly logarithmically to changes in brightness. On the modern scale, a difference of five magnitudes is defined as a brightness ratio of exactly 100, so each step of one magnitude corresponds to a factor of about 2.5 in brightness.
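
Because a difference of five magnitudes corresponds to a brightness ratio of exactly 100, the ratio for any magnitude difference Δm is 100^(Δm/5). A one-function sketch:

    def brightness_ratio(delta_m):
        """Brightness ratio corresponding to a magnitude difference delta_m."""
        return 100 ** (delta_m / 5)

    print(brightness_ratio(1))  # ~2.512 (one magnitude step)
    print(brightness_ratio(5))  # 100.0  (five steps = factor of 100 by definition)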

Uncertainty

Uncertainty is the extent to which a measurement result may differ from the true value. It is the result of a combination of factors such as: limitations to the precision and accuracy of measuring instruments; the inherently variable nature of the phenomena being measured (e.g. repeated timings of a pendulum's period will differ slightly from trial to trial); and the limits of human knowledge.

It is impossible to achieve a single, exact measurement of any quantity, even with the most precise instrument or technique. Every measurement involves error, and these errors can be either random or systematic.
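
A quick way to see the difference between the two kinds of error is to simulate readings around a known "true" value; the value, offset, and scatter below are arbitrary choices for illustration.

    import random

    random.seed(1)
    TRUE_VALUE = 9.81         # hypothetical true value being measured
    SYSTEMATIC_OFFSET = 0.05  # e.g. a miscalibrated instrument that reads consistently high

    # Each reading carries the same systematic offset plus a different random error.
    readings = [TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0, 0.02) for _ in range(10)]

    mean = sum(readings) / len(readings)
    # Averaging reduces the random scatter but cannot remove the systematic offset.
    print(f"mean of readings: {mean:.3f} (true value {TRUE_VALUE})")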

The aim of quoting an uncertainty is to indicate the range within which the true value is likely to lie, assuming that the underlying phenomenon remains constant. For this reason, uncertainties are often expressed as confidence intervals. For more information on measurement uncertainty and how to calculate it, see the ‘Metrology’ section of this site.
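
Assuming repeated readings of the same constant quantity, one common way to report an uncertainty is the mean plus or minus about two standard errors, which approximates a 95% confidence interval. The readings below are made up for illustration.

    from statistics import mean, stdev

    readings = [9.79, 9.83, 9.81, 9.80, 9.84, 9.78]  # hypothetical repeated measurements

    m = mean(readings)
    se = stdev(readings) / len(readings) ** 0.5  # standard error of the mean

    # Roughly a 95% confidence interval (the factor of 2 is an approximation).
    print(f"{m:.3f} ± {2 * se:.3f}")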
