How is measurement related to science?




















Temperature is a measure of the average kinetic energy of the particles in a substance. The kinetic energy arises from the motion of atoms and molecules, and it is postulated that at absolute zero there is no motion and therefore no kinetic energy. Today that point is referred to as 0 K on the Kelvin thermodynamic temperature scale.

Modern methods have refined the measurement considerably. Temperature can be measured and represented in many different ways; the fundamental requirements of the practice are accuracy, a standard, linearity, and reproducibility.

The SI unit, chosen for its simplicity and relationship to thermodynamics, is the kelvin, named in honor of Lord Kelvin. While its increments are equal to those of the Celsius scale, the temperature in kelvins is a true representation of kinetic energy in a thermodynamic sense. Chemistry and physics require many calculations involving temperature; those calculations are always made in kelvins.

Comparison of temperature scales: a table of the temperatures of some common events and substances, expressed in different units, illustrates a variety of temperature scales, some of which are no longer used.

It is interesting to see the temperatures of commonly occurring events on these scales, and to imagine the great hurdles that were overcome in developing modern thermometry. Conversion to and from kelvin: the equations in the original table can be used to calculate temperatures on the kelvin scale. Although in most cases scientists are equipped with some sort of electronic calculator, there are times when a conversion from one scale to another is required.
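The conversion table itself has not survived in this text, but the underlying equations are standard; the sketch below (function names are my own) shows them in Python.

```python
def celsius_to_kelvin(c):
    """K = degrees Celsius + 273.15"""
    return c + 273.15

def kelvin_to_celsius(k):
    """degrees Celsius = K - 273.15"""
    return k - 273.15

def fahrenheit_to_kelvin(f):
    """K = (degrees Fahrenheit + 459.67) * 5/9"""
    return (f + 459.67) * 5.0 / 9.0

# Water boils at 100 degrees C = 212 degrees F = 373.15 K:
print(celsius_to_kelvin(100.0))     # 373.15
print(fahrenheit_to_kelvin(212.0))  # 373.15
```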

Conversion tables can be used to convert a measurement from any temperature scale to any other, such as kelvin or Celsius. (Conversion to and from degrees Celsius: the equations in the original table convert temperatures to the Celsius scale.)

Learning Objectives: Recognize SI units and their importance for measurement.

Key Points: Every field of science involves taking measurements, understanding them, and communicating them to others.

The SI system, also called the metric system, is used around the world. There are seven base units in the SI system: the meter (m), the kilogram (kg), the second (s), the kelvin (K), the ampere (A), the mole (mol), and the candela (cd).

Key Terms: SI system: a series of units that is accepted and used throughout the scientific world.

SI Unit Prefixes. Fractions and multiples of the basic SI units can be expressed by using a set of simple prefixes.

Learning Objectives: Convert between SI units.

Key Points: The set of prefixes is simple and easy to use. Prefixes cannot be combined. The set of prefixes is universal.
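Because each prefix denotes a fixed power of ten, converting between prefixed units is a single multiplication. A minimal sketch (only a few of the prefixes are included here):

```python
# A few SI prefixes and their powers of ten; "" marks the bare base unit.
PREFIXES = {
    "giga": 1e9, "mega": 1e6, "kilo": 1e3, "": 1.0,
    "milli": 1e-3, "micro": 1e-6, "nano": 1e-9,
}

def convert(value, from_prefix, to_prefix):
    """Re-express a value of one base unit under a different prefix.
    Note that prefixes are never combined (no "kilomilligram")."""
    return value * PREFIXES[from_prefix] / PREFIXES[to_prefix]

print(convert(2.5, "kilo", ""))       # 2.5 kg = 2500.0 g
print(convert(2.5, "kilo", "milli"))  # 2.5 kg = 2500000.0 mg
```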

Key Terms: prefix: one or more letters or syllables added to the beginning of a word to modify its meaning; for example, kilo- can be added to gram to create kilogram. fraction: a part of a whole, especially a comparatively small part.

Volume and Density. Density and volume are two common measurements in chemistry.

Learning Objectives: Describe the relationship between density and volume.

Key Points: The volume of a substance is related to the quantity of the substance present at a defined temperature and pressure.

The volume of a substance can be measured in volumetric glassware, such as volumetric flasks and graduated cylinders. Density indicates how much of a substance occupies a specific volume at a defined temperature and pressure. The density of a substance can be used to identify the substance.
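Since density is mass divided by volume at the stated temperature and pressure, using it to identify a substance is a short calculation. A minimal sketch (the handbook value for aluminum, about 2.70 g/cm³ at room temperature, is used for comparison):

```python
def density(mass_g, volume_ml):
    """Density in g/mL (equivalently g/cm^3)."""
    return mass_g / volume_ml

# A 27.0 g sample displaces 10.0 mL in a graduated cylinder:
rho = density(27.0, 10.0)
print(rho)  # 2.7 -- close to aluminum's handbook density of ~2.70 g/cm^3
```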

Measurement gives us a way to communicate with one another and interact with our surroundings, but it only works if those you are communicating with understand the systems of measurement you are using. Imagine you open a recipe book and read the following: "Mix white sugar (10) with flour (1) and water. Wait for 1, and then bake for 1." How would you go about using this recipe? How much sugar do you use? How much flour or water?

How long do you wait? All measurement involves two parameters: the amount present (the number) and the unit in which that amount is expressed. The recipe lists amounts, but not the units. Without both parameters, the information is virtually useless.

To see a recipe with amounts and units, see Figure 3. There are many different systems of measurement units in the world, but one commonly used in science is the metric system (described in more detail in our Metric System module). Standard units exist in the metric system for a host of things we might want to measure, ranging from the common, such as the gram (mass) and liter (volume), to the more obscure, such as the abampere, or biot, an electromagnetic unit of electrical current.

Despite our best efforts to organize and standardize measurement, there still exist non-standard units that do not fit neatly into any formal system of measurement. This may be because the exact quantity is not known, or because the unit has some historical relevance. For example, horses continue to be measured out of tradition in the unit of height called the hand (equal to 4 inches).

Other examples exist as well; consider, for instance, the "serving size" we often encounter on pre-packaged food (Figure 4).

Serving sizes vary depending on the type of item you are eating.

The mass of the international prototype of the kilogram, a physical artifact, appeared to change slightly over time relative to its official copies, and this caused scientific uncertainty. Changing the mass standard from an artifact to a fundamental physical constant may seem relatively simple in principle, but it was technically challenging: metrologists had to demonstrate that the alternatives are accurate and reliable.

The SI system standardised international measurement systems that had developed over centuries, from the Babylonians to more recent European systems.

Some philosophers interpret mathematical theories of measurement realistically, as describing real relations among quantities. Moreover, under their interpretation measurement theory becomes a genuine scientific theory, with explanatory hypotheses and testable predictions.

Building on this work, Jo Wolff has recently proposed a novel realist account of quantities that relies on the Representational Theory of Measurement. Specifically, an attribute is quantitative if its structure has translations that form an Archimedean ordered group. It also means that being a quantity does not have anything special to do with numbers, as both numerical and non-numerical structures can be quantitative.

Information-theoretic accounts of measurement are based on an analogy between measuring systems and communication systems.

The accuracy of the transmission depends on features of the communication system as well as on features of the environment, i.e., the level of background noise. The accuracy of a measurement similarly depends on the instrument as well as on the level of noise in its environment. Conceived as a special sort of information transmission, measurement becomes analyzable in terms of the conceptual apparatus of information theory (Hartley; Shannon; Shannon and Weaver). Ludwik Finkelstein and Luca Mari suggested the possibility of a synthesis between Shannon-Weaver information theory and measurement theory.

As they argue, both theories centrally appeal to the idea of mapping: information theory concerns the mapping between symbols in the input and output messages, while measurement theory concerns the mapping between objects and numbers.

If measurement is taken to be analogous to symbol-manipulation, then Shannon-Weaver theory could provide a formalization of the syntax of measurement while measurement theory could provide a formalization of its semantics. Nonetheless, Mari also warns that the analogy between communication and measurement systems is limited.

Information-theoretic accounts of measurement were originally developed by metrologists — experts in physical measurement and standardization — with little involvement from philosophers. Independently of developments in metrology, Bas van Fraassen has recently proposed a conception of measurement in which information plays a key role.

He views measurement as composed of two levels: on the physical level, the measuring apparatus interacts with an object and produces a reading, e.g., a pointer position; on the abstract level, the outcome locates the object in a region of an abstract parameter space. Measurement thus narrows the object down to a sub-region of this parameter space, thereby reducing the range of its possible states. This reduction of possibilities amounts to the collection of information about the measured object.
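The idea that narrowing a parameter space collects information can be made concrete with Shannon's measure. A toy sketch, assuming for illustration a uniform distribution over discretized states:

```python
import math

def entropy_uniform(n_states):
    """Shannon entropy, in bits, of a uniform distribution over n states."""
    return math.log2(n_states)

# Before measurement: the parameter could lie in any of 100 discrete states.
# After measurement: it is located within a 5-state sub-region.
info_gained = entropy_uniform(100) - entropy_uniform(5)
print(f"information gained: {info_gained:.2f} bits")  # log2(100/5) ~ 4.32
```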

Since the early 2000s a new wave of philosophical scholarship has emerged that emphasizes the relationships between measurement and theoretical and statistical modeling (Morgan; Boumans; Mari; Mari and Giordani; Tal; Parker; Miyake). The central goal of measurement, according to this view, is to assign values to one or more parameters of interest in a model of the measurement process in a manner that satisfies certain epistemic desiderata, in particular coherence and consistency.

Model-based accounts have been developed by studying measurement practices in the sciences, and particularly in metrology. Metrologists typically work at standardization bureaus or at specialized laboratories that are responsible for the calibration of measurement equipment, the comparison of standards and the evaluation of measurement uncertainties, among other tasks.

It is only recently that philosophers have begun to engage with the rich conceptual issues underlying metrological practice, and particularly with the inferences involved in evaluating and improving the accuracy of measurement standards (Chang; Boumans). A central motivation for the development of model-based accounts is the attempt to clarify the epistemological principles underlying aspects of measurement practice.

For example, metrologists employ a variety of methods for the calibration of measuring instruments, the standardization and tracing of units, and the evaluation of uncertainties (for a discussion of metrology, see the previous section). Traditional philosophical accounts, such as mathematical theories of measurement, do not elaborate on the assumptions, inference patterns, evidential grounds or success criteria associated with such methods.

As Frigerio et al. point out, mathematical theories of measurement focus almost exclusively on the construction of measurement scales. By contrast, model-based accounts take scale construction to be merely one of several tasks involved in measurement, alongside the definition of measured parameters, instrument design and calibration, object sampling and preparation, error detection and uncertainty evaluation, among others.

Other, secondary interactions may also be relevant for the determination of a measurement outcome, such as the interaction between the measuring instrument and the reference standards used for its calibration, and the chain of comparisons that trace the reference standard back to primary measurement standards (Mari). Although measurands need not be quantities, a quantitative measurement scenario will be supposed in what follows.

Two sorts of measurement outputs are distinguished by model-based accounts [JCGM]: instrument indications and measurement outcomes. As proponents of model-based accounts stress, inferences from instrument indications to measurement outcomes are nontrivial and depend on a host of theoretical and statistical assumptions about the object being measured, the instrument, the environment and the calibration process.

Measurement outcomes are often obtained through statistical analysis of multiple indications, thereby involving assumptions about the shape of the distribution of indications and the randomness of environmental effects (Bogen and Woodward). Measurement outcomes also incorporate corrections for systematic effects, and such corrections are based on theoretical assumptions concerning the workings of the instrument and its interactions with the object and environment.

Systematic corrections involve uncertainties of their own, for example in the determination of the values of constants, and these uncertainties are assessed through secondary experiments involving further theoretical and statistical assumptions.

Moreover, the uncertainty associated with a measurement outcome depends on the methods employed for the calibration of the instrument. Calibration involves additional assumptions about the instrument, the calibrating apparatus, the quantity being measured and the properties of measurement standards (Rothbart and Slayden; Franklin; Baird). Finally, measurement involves background assumptions about the scale type and unit system being used, and these assumptions are often tied to broader theoretical and technological considerations relating to the definition and realization of scales and units.
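A minimal sketch of the inference described in the last few paragraphs, assuming a simple GUM-style model in which a calibration-derived systematic correction is added to the mean of repeated indications and the uncertainties are combined in quadrature (all numbers are invented):

```python
import math
import statistics

# Repeated instrument indications (say, a length in millimeters)
indications = [10.03, 10.01, 10.04, 10.02, 10.05, 10.02]

mean = statistics.mean(indications)
# Type A evaluation: standard uncertainty of the mean from the scatter
u_a = statistics.stdev(indications) / math.sqrt(len(indications))

# Systematic correction from calibration, with its own (Type B) uncertainty
correction, u_b = -0.02, 0.01

outcome = mean + correction
u_combined = math.sqrt(u_a**2 + u_b**2)
print(f"outcome: {outcome:.3f} mm, standard uncertainty: {u_combined:.3f} mm")
```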

These various theoretical and statistical assumptions form the basis for the construction of one or more models of the measurement process. Measurement is viewed as a set of procedures whose aim is to coherently assign values to model parameters based on instrument indications. Models are therefore seen as necessary preconditions for the possibility of inferring measurement outcomes from instrument indications, and as crucial for determining the content of measurement outcomes.

As proponents of model-based accounts emphasize, the same indications produced by the same measurement process may be used to establish different measurement outcomes depending on how the measurement process is modeled (Mari). Similarly, models are said to provide the necessary context for evaluating various aspects of the goodness of measurement outcomes, including accuracy, precision, error and uncertainty (Boumans; Mari).

Model-based accounts diverge from empiricist interpretations of measurement theory in that they do not require relations among measurement outcomes to be isomorphic or homomorphic to observable relations among the items being measured (Mari). Indeed, according to model-based accounts, relations among measured objects need not be observable at all prior to their measurement (Frigerio et al.). Instead, the key normative requirement of model-based accounts is that values be assigned to model parameters in a coherent manner.

The coherence criterion may be viewed as a conjunction of two sub-criteria: (i) coherence of model assumptions with relevant background theories or other substantive presuppositions about the quantity being measured; and (ii) objectivity, i.e., the mutual consistency of measurement outcomes across different measuring instruments, environments and models.

The first sub-criterion is meant to ensure that the intended quantity is being measured, while the second sub-criterion is meant to ensure that measurement outcomes can be reasonably attributed to the measured object rather than to some artifact of the measuring instrument, environment or model.

Taken together, these two requirements ensure that measurement outcomes remain valid independently of the specific assumptions involved in their production, and hence that the context-dependence of measurement outcomes does not threaten their general applicability.

Besides their applicability to physical measurement, model-based analyses also shed light on measurement in economics. Like physical quantities, values of economic variables often cannot be observed directly and must be inferred from observations based on abstract and idealized models. The nineteenth-century economist William Jevons, for example, measured changes in the value of gold by postulating certain causal relationships between the value of gold, the supply of gold and the general level of prices (Hoover and Dowell; Morgan). Taken together, these models allowed Jevons to infer the change in the value of gold from data concerning the historical prices of various goods.

The ways in which models function in economic measurement have led some philosophers to view certain economic models as measuring instruments in their own right, analogously to rulers and balances (Boumans; Morgan). Marcel Boumans explains how macroeconomists are able to isolate a variable of interest from external influences by tuning parameters in a model of the macroeconomic system.

This technique frees economists from the impossible task of controlling the actual system. As Boumans argues, macroeconomic models function as measuring instruments insofar as they produce invariant relations between inputs (indications) and outputs (outcomes), and insofar as this invariance can be tested by calibration against known and stable facts. When such model-based procedures are combined with expert judgment, they can produce reliable measurements of economic phenomena even outside controlled laboratory settings (Boumans).

Another area where models play a central role in measurement is psychology. The measurement of most psychological attributes, such as intelligence, anxiety and depression, does not rely on homomorphic mappings of the sort espoused by the Representational Theory of Measurement (Wilson). Instead, psychological measurement typically proceeds by way of statistical models. These models are constructed from substantive and statistical assumptions about the psychological attribute being measured and its relation to each measurement task.

For example, Item Response Theory, a popular approach to psychological measurement, employs a variety of models to evaluate the reliability and validity of questionnaires. One of the simplest models used to calibrate such questionnaires is the Rasch model (Rasch). New questionnaires are calibrated by testing the fit between their indications and the predictions of the Rasch model, and assigning difficulty levels to each item accordingly.
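In the dichotomous Rasch model, the probability of a correct response depends only on the difference between the respondent's ability and the item's difficulty, both on a shared logit scale. A minimal sketch (the parameter values are invented):

```python
import math

def rasch_p(ability, difficulty):
    """P(correct) = exp(a - d) / (1 + exp(a - d)), the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A respondent of ability 1.0 facing items of increasing difficulty:
for d in (-1.0, 0.0, 1.0, 2.0):
    print(f"difficulty {d:+.1f}: P(correct) = {rasch_p(1.0, d):.2f}")
# Calibration checks whether observed response patterns fit these predictions.
```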

The model is then used in conjunction with the questionnaire to infer levels of English language comprehension (outcomes) from raw questionnaire scores (indications) (Wilson; Mari and Wilson). Psychologists are typically interested in the results of a measure not for their own sake, but for the sake of assessing some underlying latent psychological attribute, e.g., English language comprehension. A good fit between item responses and a statistical model does not yet determine what the questionnaire is measuring.

One way of validating a psychometric instrument is to test whether different procedures that are intended to measure the same latent attribute provide consistent results; this is known as construct validation. A construct is an abstract representation of the latent attribute intended to be measured. Constructs are denoted by variables in a model that predicts which correlations would be observed among the indications of different measures if they are indeed measures of the same attribute.
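Checking a predicted correlation against data is straightforward; a toy sketch (the scores are invented) of comparing two instruments intended to measure the same attribute:

```python
import statistics

# Scores of the same seven subjects on two instruments that are
# both intended to measure the same latent attribute:
measure_a = [12, 15, 9, 20, 14, 18, 11]
measure_b = [14, 16, 10, 21, 13, 19, 12]

# Pearson's r (statistics.correlation requires Python 3.10+)
r = statistics.correlation(measure_a, measure_b)
print(f"r = {r:.2f}")  # a high r is consistent with a shared attribute
```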

In recent years, philosophers of science have become increasingly interested in psychometrics and the concept of validity. One debate concerns the ontological status of latent psychological attributes. Elina Vessonen has defended a moderate form of operationalism about psychological attributes, and argued that moderate operationalism is compatible with a cautious type of realism. Another recent discussion focuses on the justification for construct validation procedures.

According to Anna Alexandrova, construct validation is in principle a justified methodology, insofar as it establishes coherence with theoretical assumptions and background knowledge about the latent attribute. In practice, however, validators often refrain from committing to any substantive theory of the attribute; this defeats the purpose of construct validation and turns it into a narrow, technical exercise (Alexandrova and Haybron; Alexandrova; see also McClimans et al.). A more fundamental criticism leveled against psychometrics is that it dogmatically presupposes that psychological attributes can be quantified.

Michell argues that psychometricians have not made serious attempts to test whether the attributes they purport to measure have quantitative structure, and have instead adopted an overly loose conception of measurement that disguises this neglect.

In response, Borsboom and Mellenbergh argue that Item Response Theory provides probabilistic tests of the quantifiability of attributes. Psychometricians who construct a statistical model initially hypothesize that an attribute is quantitative, and then subject the model to empirical tests.

When successful, such tests provide indirect confirmation of the initial hypothesis that the attribute is quantitative. Several scholars have pointed out similarities between the ways models are used to standardize measurable quantities in the natural and social sciences. Others have raised doubts about the feasibility and desirability of adopting the example of the natural sciences when standardizing constructs in the social sciences.

Examples of Ballung concepts, i.e., concepts whose instances share a family resemblance rather than a common structure, are race, poverty, social exclusion, and the quality of PhD programs. Alexandrova points out that ethical considerations bear on questions about the validity of measures of well-being no less than considerations of reproducibility.

Such ethical considerations are context sensitive, and can only be applied piecemeal. In a similar vein, Leah McClimans argues that uniformity is not always an appropriate goal for designing questionnaires, as the open-endedness of questions is often both unavoidable and desirable for obtaining relevant information from subjects.

In such cases, small changes to the design of a questionnaire or the analysis of its results may result in significant harms or benefits to patients (McClimans; Stegenga). These insights highlight the value-laden and contextual nature of the measurement of mental and social phenomena. Rather than emphasizing the mathematical foundations, metaphysics or semantics of measurement, philosophical work in recent years tends to focus on the presuppositions and inferential patterns involved in concrete practices of measurement, and on the historical, social and material dimensions of measuring.

In the broadest sense, the epistemology of measurement is the study of the relationships between measurement and knowledge. Central topics that fall under the purview of the epistemology of measurement include the conditions under which measurement produces knowledge; the content, scope, justification and limits of such knowledge; the reasons why particular methodologies of measurement and standardization succeed or fail in supporting particular knowledge claims, and the relationships between measurement and other knowledge-producing activities such as observation, theorizing, experimentation, modelling and calculation.

In pursuing these objectives, philosophers are drawing on the work of historians and sociologists of science, who have been investigating measurement practices for a longer period (Wise and Smith; Latour). The following subsections survey some of the topics discussed in this burgeoning body of literature. A topic that has attracted considerable philosophical attention in recent years is the selection and improvement of measurement standards.

Generally speaking, to standardize a quantity concept is to prescribe a determinate way in which that concept is to be applied to concrete particulars. The term "standard" is used both for the abstract prescription and for the concrete artifacts that realize it; this duality in meaning reflects the dual nature of standardization, which involves both abstract and concrete aspects.

In Section 4 it was noted that standardization involves choices among nontrivial alternatives, such as the choice among different thermometric fluids or among different ways of marking equal duration. Appealing to theory to decide which standard is more accurate would be circular, since the theory cannot be determinately applied to particulars prior to a choice of measurement standard.

One response to this circularity is to treat the choice of standard as a pure convention that requires no justification. A drawback of this solution is that it supposes that choices of measurement standard are arbitrary and static, whereas in actual practice measurement standards tend to be chosen based on empirical considerations and are eventually improved or replaced with standards that are deemed more accurate.

A new strand of writing on the problem of coordination has emerged in recent years, consisting most notably of the works of Hasok Chang, Barwich and Chang, and Bas van Fraassen.

These works take a historical and coherentist approach to the problem. Rather than attempting to avoid the problem of circularity completely, as their predecessors did, they set out to show that the circularity is not vicious. Chang argues that constructing a quantity-concept and standardizing its measurement are co-dependent and iterative tasks. The pre-scientific concept of temperature, for example, was associated with crude and ambiguous methods of ordering objects from hot to cold.

Thermoscopes, and eventually thermometers, helped modify the original concept and made it more precise. With each such iteration the quantity concept was re-coordinated to a more stable set of standards, which in turn allowed theoretical predictions to be tested more precisely, facilitating the subsequent development of theory and the construction of more stable standards, and so on.

From either vantage point, coordination succeeds because it increases coherence among elements of theory and instrumentation. It is only when one adopts a foundationalist view, and attempts to find a starting point for coordination free of presupposition, that this historical process erroneously appears to lack epistemic justification. The new literature on coordination shifts the emphasis of the discussion from the definitions of quantity-terms to the realizations of those definitions.

Examples of metrological realizations are the official prototypes of the kilogram and the cesium fountain clocks used to standardize the second [JCGM]. Recent studies suggest that the methods used to design, maintain and compare realizations have a direct bearing on the practical application of concepts of quantity, unit and scale, no less than the definitions of those concepts (Riordan; Tal). The relationship between the definition and realizations of a unit becomes especially complex when the definition is stated in theoretical terms.

Several of the base units of the International System (SI) — including the meter, kilogram, ampere, kelvin and mole — are no longer defined by reference to any specific kind of physical system, but by fixing the numerical value of a fundamental physical constant. The kilogram, for example, was redefined in 2019 as the unit of mass such that the numerical value of the Planck constant is exactly 6.62607015 × 10⁻³⁴ kg m² s⁻¹.

Realizing the kilogram under this definition is a highly theory-laden task. The study of the practical realization of such units has shed new light on the evolving relationships between measurement and theory (Tal; de Courtenay et al.; Wolff). As already discussed above (Sections 7 and 8), measurement and theory are mutually dependent. On the historical side, the development of theory and measurement proceeds through iterative and mutual refinements. On the conceptual side, the specification of measurement procedures shapes the empirical content of theoretical concepts, while theory provides a systematic interpretation for the indications of measuring instruments.

This interdependence of measurement and theory may seem like a threat to the evidential role that measurement is supposed to play in the scientific enterprise. After all, measurement outcomes are thought to be able to test theoretical hypotheses, and this seems to require some degree of independence of measurement from theory. This threat is especially clear when the theoretical hypothesis being tested is already presupposed as part of the model of the measuring instrument.

To cite an example from Franklin et al.: there would seem to be, at first glance, a vicious circularity if one were to use a mercury thermometer to measure the temperature of objects as part of an experiment to test whether or not objects expand as their temperature increases. Nonetheless, Franklin et al. argue that the circularity is not vicious: the mercury thermometer could be calibrated against another thermometer whose principle of operation does not presuppose the law of thermal expansion, such as a constant-volume gas thermometer, thereby establishing the reliability of the mercury thermometer on independent grounds.

To put the point more generally, in the context of local hypothesis-testing the threat of circularity can usually be avoided by appealing to other kinds of instruments and other parts of theory. A different sort of worry about the evidential function of measurement arises on the global scale, when the testing of entire theories is concerned.

As Thomas Kuhn argues, scientific theories are usually accepted long before quantitative methods for testing them become available. The reliability of newly introduced measurement methods is typically tested against the predictions of the theory rather than the other way around. Hence, Kuhn argues, the function of measurement in the physical sciences is not to test the theory but to apply it with increasing scope and precision, and eventually to allow persistent anomalies to surface that would precipitate the next crisis and scientific revolution.

Note that Kuhn is not claiming that measurement has no evidential role to play in science; his point concerns the order in which theory and measurement typically develop. In earlier, logical positivist accounts, the theory-ladenness of measurement was correctly perceived as a threat to the possibility of a clear demarcation between theoretical and observational languages.

Contemporary discussions, by contrast, no longer present theory-ladenness as an epistemological threat but take for granted that some level of theory-ladenness is a prerequisite for measurements to have any evidential power. Without some minimal substantive assumptions about the quantity being measured, such as its amenability to manipulation and its relations to other quantities, it would be impossible to interpret the indications of measuring instruments and hence impossible to ascertain the evidential relevance of those indications.

This point was already made by Pierre Duhem (see also Carrier). Moreover, contemporary authors emphasize that theoretical assumptions play crucial roles in correcting for measurement errors and evaluating measurement uncertainties. Indeed, physical measurement procedures become more accurate when the model underlying them is de-idealized, a process which involves increasing the theoretical richness of the model (Tal). This blurring of the line between measurement and theoretical inference is especially clear when one attempts to account for the increasing use of computational methods for performing tasks that were traditionally accomplished by measuring instruments.

As Margaret Morrison and Wendy Parker argue, there are cases where reliable quantitative information is gathered about a target system with the aid of a computer simulation, but in a manner that satisfies some of the central desiderata for measurement, such as being empirically grounded and backward-looking (see also Lusk). Such information does not rely on signals transmitted from the particular object of interest to the instrument, but on the use of theoretical and statistical models to process empirical data about related objects.

For example, data assimilation methods are customarily used to estimate past atmospheric temperatures in regions where thermometer readings are not available.

These estimations are then used in various ways, including as data for evaluating forward-looking climate models.

Two key aspects of the reliability of measurement outcomes are accuracy and precision. Consider a series of repeated weight measurements performed on a particular object with an equal-arms balance.

On the error-based way of drawing the distinction, the accuracy of the measurements is the closeness of agreement between the measured values and the true value of the weight, while their precision is the closeness of agreement among the measured values themselves [JCGM]. Though intuitive, the error-based way of carving the distinction raises an epistemological difficulty. It is commonly thought that the exact true values of most quantities of interest to science are unknowable, at least when those quantities are measured on continuous scales.
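The distinction can be illustrated by simulating the weighings, since only in a simulation is the true value available by stipulation. A toy sketch (true value, bias and spread are invented) in which the balance is precise but not accurate:

```python
import random
import statistics

random.seed(0)
true_mass = 100.00  # grams; stipulated here, unknowable in real measurement

# A precise but biased balance: tiny scatter, constant 0.5 g offset
readings = [random.gauss(true_mass + 0.50, 0.02) for _ in range(10)]

bias = statistics.mean(readings) - true_mass  # accuracy, error-based sense
spread = statistics.stdev(readings)           # precision
print(f"bias ~ {bias:.2f} g, spread ~ {spread:.3f} g")
```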



