Abstract
This paper follows the journey of two small fluid units throughout the nineteenth century in Anglo-American medicine and pharmacy, explaining how the non-uniform “drop” survived while the standardized minim became obsolete. I emphasize two roles these units needed to fulfill: that of a physical measuring device, and that of a rhetorical communication device. First, I discuss the challenges unique to measuring small amounts of fluid, outlining how the modern medicine dropper developed out of an effort to resolve problems with the “minimometer,” which measured minims. Second, I explain how drops, utilized in “the open drop method” of administering general anesthesia, effectively communicated a gradual process and an epistemically valuable heuristic to the audience of practitioners, whose attention to individual medical outcomes was important for verifying the proper dosage. The standardized minim was never able to achieve success as the drop’s intended replacement; the non-uniform drop better served the relevant epistemic goals within the practical contexts for which these units were designed. The surprising historical trajectory of drops should cause us to question the common conflation of “standardization” with “progress” in the history and philosophy of measurement. This study also exemplifies how examining non-standard measurement practices can be instructive for better understanding the role and function of standardization within the epistemology of measurement.
1. Introduction
In the medical sciences, we utilize standardized units (such as mL, cc, etc.) to achieve rigor and consistency between our medical studies and the dosages they determine. However, one of the most commonly used units for administering medication is non-uniform: the “drop.” Unlike most units, such as milliliters, one drop is not always equal to another (and, in fact, we expect that no two are ever exactly equal). Yet, drops are used in many medical contexts, from eye and ear medications, to the continuous IV drip, to infant oral vaccines.1 We may wonder why such a seemingly unscientific and primitive measure is still used in modern medicine. Even more puzzling is how this unit made it through the nineteenth century, a century characterized by sweeping metrological reforms that abolished scores of ancient and variable units.
The very existence of the drop as a unit in modern medicine runs against our intuitions about the historical trajectory of metrological progress: primitive, variable units fall by the wayside and are replaced with standardized, uniform, metrologically sound units which are reliably consistent across individual uses and across devices. This is how we are used to telling the history of units in metrology: survival of the standard-est, if you will. The history of the drop, however, stands against this narrative. The drop did not survive merely out of a lack of competition. Indeed, a replacement unit, the “minim,” was introduced early in the nineteenth century as an appropriate, uniform substitute for the variable drop. Yet the minim is now completely unknown to modern medical practice, its history erased from metrological memory. The story behind the drop’s continued use and relevance, and the minim’s gradual fading from use and discourse, is enlightening as to the true triumphs and the true shortcomings of standardization in science and medicine. Uniform standardization of quantities, rather than being a prerequisite for reliable measurement, can actually stand in the way of achieving reliable measuring procedures, especially those which must incorporate feedback from observation in real time.
We can understand this history best, I argue, by acknowledging two roles that a unit must fulfill, as both a physical tool and a rhetorical one. These two roles also present two challenges for unit standardization: (1) the unit must be practically producible as a physical manifestation of an abstract ideal, in the contexts relevant to its intended use, and (2) the unit must serve as a tool of communication for relevant audiences. To illustrate how the uniform “minim” failed to meet both challenges, this paper focuses primarily on issues of medical dosage in British and American medicine, the context in which the minim was first introduced and debated.
Accordingly, the first part of this paper traces the journey of the drop and the minim as physical units, as well as the devices developed to produce them. Along the way, I discuss the problems practitioners faced in producing reliable measures of small amounts of fluid, as well as the failure of proposed solutions. I show how, particularly in the context of medical dosage, attempts to improve the realization of the uniform minim actually contributed to the eventual viability of the non-uniform drop.
Key to understanding why the non-uniform drop triumphed over the standardized minim is a philosophical investigation of the knowledge claims for which each unit was used, and the audiences to whom those knowledge claims were communicated. The second half of this paper is dedicated to a close investigation of the case of general anesthesia, with special attention to Alice Magaw’s use of the “open drop method,” a gradual and continuous administration technique. The failure of the minim, with its nominal precision, to secure patient safety sheds light on the role of the “drop” as a rhetorical tool for communicating an administration process. The use of drops enabled practitioners to adopt a heuristic which directed attention to relevant information, and obscured information which had received undue attention, to achieve the kind of precision which was relevant to the goals at hand: saving patient lives.
These episodes, concerning the devices, challenges, and discourse surrounding these units, will illustrate how developments in medical technologies, as well as in the practice of medicine, created an epistemic role for the drop to fulfill for centuries to come, even as these same developments rendered the minim obsolete.2
2. The Fall of the Drop, the Rise of the Minim
In early nineteenth century England, it looked as though the future of the “drop,” as a unit of volume in science and medicine, was grim indeed. Particularly in the wake of the French Revolution and the wave of reforms to measurement standards that followed, English scientists and medical practitioners felt the pressure to refine and simplify their own system of units, which had long been the cause of much confusion and miscommunication. At the turn of the nineteenth century, there was still no single standard weight system in England, but rather multiple sets of units used by practitioners and manufacturers, depending on one’s occupation and whether the unit was being used in commerce or for medicinal recipes.
The “drop” was one such unit term, problematic for its ambiguity and indiscriminate use, and it thus became subject to the scythe of nineteenth-century standardization. The Royal College of Physicians of London considered the confusion caused by fluid measures (such as the drop) one of the main reasons for revising its Pharmacopoeia of medicinal recipes in 1809, a publication which was particularly influential among British and American apothecaries. Since its inception in 1618, The Pharmacopoeia of the Royal College of Physicians of London had only undergone major revisions every few decades (or more). The 1809 volume was carefully compiled and edited with input from members of the Royal College of Physicians of London (henceforth, RCPL), the Royal College of Surgeons, and the Society of Apothecaries, as well as others with related expertise. The preface explains major changes from previous editions, such as the adoption of French chemical nomenclature where appropriate, to keep pace with advancements in chemistry and botany. Rather than push for adopting French units as well, the Pharmacopoeia sought to straighten out the current confusion of English weights and measures; the preface pays special attention to the smallest fluid unit, decrying drops as “inaccurate, and influenced by a variety of circumstances” (RCPL 1809; p. iv–vi, xix). In the first chapter, drops are even more thoroughly denounced as a unit for medicines:
It may be further objected to the use of drops; that their bulk is influenced by the quantity of liquid contained in the bottle from which they fall, by the thickness of the lip of the same bottle, and even by the inequalities in the different parts of the lip of the same bottle, that volatile liquids to which this mode is most commonly applied, are thus exposed with extensive surfaces and their evaporation promoted; on every account the adoption of some decisive, convenient, and uniform substitute becomes necessary (Royal College of Physicians of London 1809 [RCPL], p. 5).
…when liquids of less specific gravity [than water] are used, a much larger number [of drops] is required to fill the same measure: thus of proof spirit 140 drops are required to equal the bulk of sixty of water dropped from the same vessel. If…it was generally assumed that sixty drops were equal to one fluidrachm, and one fluidrachm of tincture of opium was substituted for sixty drops prescribed, twice the dose intended would be given… (RCPL 1809, p. 4–5).3
…we measure the smaller divisions of liquids by a glass measure marked at equal distances; for the number of drops is a fallacious and uncertain mode of division, since compared with those of water, almost double the number of drops of any tincture are required to fill the same measure… (RCPL 1809, p. 4).
Note that, when first introducing the minim, the Pharmacopoeia refers to this addition to the system of measures in terms of the measuring device, not the unit itself: “The College have adopted the glass measures originally invented by the late Mr. Timothy Lane, F. R. S.,” which partition the standard wine gallon into “minims” (RCPL 1809, p. 6). In the back of the Pharmacopoeia, we find plates which depict several graduated measures, including one of a fluidrachm divided into minims (“Fig. 7”) as well as the tube for measuring minims (“Fig. 8”) [Fig. 2].
The minim-tube and its operation are described, in detail, thus:
A small tube, open at both ends, and graduated from five minims down to one. This being inserted into any liquid, to the level of the mark of the quantity required, and the superior end then closed by the finger, will retain that quantity on its removal, and thus allow of the mensuration of the smallest division of the wine gallon which is required (RCPL 1809, p. 454).
Today, the minim is entirely absent from pharmaceutical and medical practice, aside from some of the oldest syringes in the farthest corners of hospital storage rooms, relics of the past which have led to confusion at best and tragic overdose at worst (Davis 1993, p. 18). In contrast, the drop has since been reintroduced into the list of conventional conversions in medical literature as a twentieth of a mL. Although this is merely a nominal position within the system of units, with little bearing on the empirical reality of drop size, it offers a convenient approximate conversion which fits nicely into the larger system, despite being necessarily fictional. Empirically speaking, standard medicine droppers rarely deposit drops of this size, even if this definition is restricted to water. In my own experiments with standard medicine droppers, the average drop of water, depending on method of delivery (e.g., angle of dropper, temperature of water), ranges from 0.027 to 0.043 mL, rather than 0.050 mL. Furthermore, theoretically speaking, no single conversion from drops of fluid to standard fluid units is possible. Different liquids (even with other factors held constant) will have different drop volumes due to the difference in surface tension across liquids (this point will become clearer in the next section, where Tate’s Law is discussed). While on paper drops are now accepted into the series of metric conversions, keeping their name, in physical reality the drop is a non-metricated unit which stands on its own, outside any rigorous conversion to other units.
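To put rough numbers on this mismatch (a back-of-the-envelope calculation from the figures cited above, nothing more):

$$ \text{nominal: } \frac{1\ \text{mL}}{0.050\ \text{mL/drop}} = 20\ \text{drops per mL}; \qquad \text{observed: } \frac{1\ \text{mL}}{0.043\ \text{mL/drop}} \approx 23 \ \text{ to } \ \frac{1\ \text{mL}}{0.027\ \text{mL/drop}} \approx 37\ \text{drops per mL}. $$

A practitioner relying on the conventional conversion could thus deliver barely more than half the intended volume when counting drops.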
Although philosophers and historians of science and medicine have paid little attention to the journey of the drop and the minim, this episode has not entirely gone unnoticed. It earns a paragraph from the historians J. K. Crellin and J. R. Scott, in their paper on the increasing popularity of pre-mixed medications (as opposed to powders which are then diluted at home) approaching the nineteenth century. They note in passing that, in light of the “undoubted popularity of the minim it is surprising that the drop survived until well into this century” (1970, p. 146). Crellin and Scott briefly mention a few possible explanations for the persistence of the drop, such as convenience, common confusion about terminology, or the influence of the prominent pharmacist William H. Martindale.5 Historical evidence points instead to a more complex explanation, relating to both practical and epistemic issues unique to measuring small amounts of liquid. As we will soon see, minims and minim-devices were also problematic in their own right. No sooner had the 1809 Pharmacopoeia established the minim as a reliable, standard fluid unit than critique arose to meet this proposed solution to the problems of variability.
3. The Minim and the Minimometer: A Critique and Alternative Proposal
The precision and reliability of the smallest fluid unit were of concern not only for uses in medicine, but also more generally for uses in commerce and trade. This is signaled by a promptly published critique of the minim and “minimometer” in the Tradesman; or, Commercial Magazine in 1810, just months after the Pharmacopoeia established the minim as the recommended standard fluid unit. In an article titled “On an Exact Admeasurement of Fluids,” the author (signed simply “Philo”) directly references the discussion of drops and minims in the Pharmacopoeia’s 1809 edition. While the article admits the issues of drop variability noted by the Pharmacopoeia, it goes on to describe the minim and the minimometer (also referred to here as the stagonometer, from the Greek stagon, meaning “drop”) as no less problematic. Philo first describes the operation of the device:
…a graduated tube, open at both ends, which, when immersed in the fluid to the necessary depth, is to be withdrawn, by placing the finger on the uper [sic] orifice, by which means a column of fluid is suspended equal in length to the portion of the tube immersed… (Philo 1810, p. 318).
He then enumerates its faults:
In the next place, it will occasion considerable waste and trouble, requiring always that the external adherent layer of fluid be wiped off the tube to prevent inaccuracy, and that the bottle, into which it is to be immerged [sic], contain a depth of fluid equal to the height required in the stagonometer. Lastly, it will not be so accurate as might be wished, in consequence of the inherent stratum of fluid lining the internal surface of the tube, which, from the diameter being necessarily very small, will bear a considerable proportion to the fluid discharged (Philo 1810, p. 318) [emphasis mine].
Next, Philo makes recommendations for an improved device, a set of glass tubes. These tubes would be “…of a convenient form and diameter (so as readily to admit the fluid to be poured in) open at each end…,” “…drawn to a point at the end…,” “…in which there should be an orifice sufficient to permit the escape of one drop only at a time…,” “…or…a set of phials, each having an angular spout; the acuteness of the angle regulating in this case the size of the drop, as the smallness of the opening, and the tapering of the point, determine it in the other” (Philo 1810, p. 319). To our ears, the devices that Philo proposes sound quite similar to the glass portion of modern medicine droppers, without the rubber bulb.7 In the end, Philo’s argument amounts to this: graduated measures for minims are problematic in their own right, and should not be adopted; instead, it is better to improve the uniformity of the drop.
To better understand the context of Philo’s proposal, we remind ourselves that before the invention of the dropper, drops were dispensed by tipping a bottle or a vial until a drop was released from the edge. In 1808, Dr. C. Shuttleworth, an English surgeon, published an influential pamphlet on his experiments comparing the weights of different drops of fluid, in order to see how many drops of different fluids would equal the same weight in grains (60 gr.). To his surprise, he found that they varied widely; while 60 drops of arsenic weighed only slightly more than 60 grains, it took more than 134 drops of laudanum to achieve the same weight (Shuttleworth 1808, p. 11).
In the process of conducting this study, Shuttleworth realized that the graduated “drop-measure” that had been accepted at Apothecary’s Hall [Fig. 3] did nothing to ensure uniformity of individual drops (Shuttleworth 1808, p. 8). Since this “drop-measure” proved unhelpful for performing his intended comparisons, he first needed to “adopt some standard” (Shuttleworth 1808, p. 10). Thus, the first part of his experimental effort was spent identifying certain vessels ideal for achieving the 60-drops-per-fluidrachm rate when drops fell from the edge. He pursued this with obsessive exactitude, going as far as to mark the portion of the lip on the vessel which led to the ideal drop volume, because he found that drops varied greatly depending on the shape of the lip of the vessel, independent of the size of the vessel or the nature of the liquid (pp. 9–10). This pamphlet not only influenced the 1809 Pharmacopoeia to take the problem of drops seriously, but also influenced some American pharmacists, who decades later credited Shuttleworth’s as the earliest effort to standardize the drop (RCPL 1809, p. 7; Durand 1829, p. 597; Milne-Edwards and Vavasseur 1829, p. 44).
Another important development came in 1852, when John Abraham, a well-known druggist and chemist in Liverpool, added a rubber bulb as a modification of the “minim-meter” (Abraham 1852, p. 268). Now fluid could be sucked in, rather than the tube needing to be submerged to a certain level in a container of fluid. This also relieved practitioners of the need to create suction with the thumb, steadying the conditions under which liquid could be dispensed. Steady conditions and a narrow spout allowed Tate’s equation to become relevant: air propelled from the rubber bulb lets the drop grow toward its maximum volume at the orifice, the point at which the liquid’s surface tension can no longer counteract the force of gravity on the drop’s increasing mass. Yet, if we look closely at Tate’s Law, we see that drop volume will still vary from one liquid to another, given that different liquids differ in surface tension.
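For the reader who wants the physics explicit: Tate’s 1864 law, in its idealized textbook form (my reconstruction; the formula does not appear in the historical sources discussed here), states that the weight of a detaching drop is set by the circumference of the orifice and the surface tension of the liquid:

$$ W = mg = 2\pi r \gamma, $$

where $W$ is the weight of the drop, $r$ is the radius of the dropper tip, and $\gamma$ is the surface tension of the liquid. In practice only a fraction of this ideal volume actually detaches, but the proportionalities are what matter here: a narrower tip yields a smaller drop, and, for a fixed tip, a liquid of lower surface tension yields a smaller drop. No improvement to the device, then, can make “the drop” uniform across different liquids.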
This aside, at the start of the nineteenth century there were well-known and well-founded arguments against drops as too variable and unwieldy, particularly while they were still deposited from the sides of bottles and vials which themselves varied greatly. Yet minims and minimometers also suffered from their own set of problems. Some of these problems were practical concerns: the operation of the minimometer required that the container of liquid be tall enough (and wide enough) for the minimometer to be inserted to the proper level. Additionally, to maintain accuracy, the minimometer needed to be cleaned and dried after each use. Other problems were epistemic: such small amounts of liquid were subject to significant alteration from contact with the glass, the liquid’s adhesion to the inside surface causing a residual loss of an indeterminate amount.
In its struggle to successfully embody its promised precision and achieve status as a standard unit, the minim is not unique in the history of nineteenth century science. Bruce Hunt, in his enlightening historical account of the predecessors to the standard ohm, describes Wilhelm Weber’s attempt to establish his own unit for electrical resistance in the 1850s: the “meter-per-second.” Much as the writers of the 1809 Pharmacopoeia wished to replace the drop with a standard unit that fit neatly into a table of conversions, Weber’s “meter-per-second” was tied fundamentally to his units of work and energy, offering an overall systemic simplicity. Yet, Weber’s system required careful use of “a delicate measurement with special apparatus to determine the resistance of a given wire in terms of a velocity,” in units which, while systemically simple and elegant, were far too small to be of practical use to telegraph engineers (Hunt 1994, p. 55). Even when this delicate measuring process was successfully accomplished, its determinations differed by several percent due to the sensitivity of the process of measurement itself. Thus, Weber’s meter-per-second failed to be taken seriously by telegraph engineers, as it was “difficult to explain in an elementary way how its units were defined,” and “even more difficult to embody them accurately in material standards” (Hunt 1994, p. 55). Other proposed units of resistance had been defined in terms of a particular length of wire or tubing which could be held in the hand, easily comprehended, and reproduced. Although a theoretically precise standard unit, Weber’s resistance unit ultimately failed to meet the challenge it faced in its attempt to physically realize its promised precision and prove useful as a way to communicate information among engineers. Eventually, the standard ohm did emerge from the many competitors as a unit for electrical resistance which was both physically realizable for its intended purposes and useful for communicating to the necessary audiences.
Likewise, given the physical and epistemic issues which the minimometer faced in its early development, we might expect that these issues would soon be overcome, just as in the case of the ohm. Surely, liquid measuring devices for minims would catch up to the metrological rigor that the smallest fluid unit demands, and an improved minimometer would render the drop obsolete. Instead, what we find is that drops and minims both persist throughout the nineteenth century, in practice and in discourse. The reasons for the persistence of both units pertain not only to the extent to which they successfully fulfilled their role as physical realizations in the context of crucial measuring tasks, but also to the rhetorical role the units played as tools of communication within this context.
4. The Minim and the Drop, Side-by-side in the Nineteenth Century
Judging by advertisements in the British Medical Journal in the late nineteenth and early twentieth century, it would seem that over the course of the nineteenth century standardization and uniformity finally won out, with the minim replacing the unorthodox and antiquated drop. Advertisements for medication in the back matter of the 1894 BMJ feature dosages in minims, and not in drops [Fig. 4].
As seen in the images below, the number of minims advertised in dosages ranged widely, from 1-minim doses to 30-minim doses (half a fluidrachm) (“Back Matter” 1894, p. 35). Perhaps particularly telling is the example below [Fig. 5], under the title “Pumiline”: the dosage listed for “Pumiline essence” is “2 to 5 minims” (p. 39).
The fact that an approximate dosage (“2 to 5 minims”) is stated in terms of minims, the uniform unit which was adopted for reasons of its exactitude, informs us of the overwhelming status the minim appears to have attained by the end of the nineteenth century. Even when precision is clearly unnecessary, minims are the units of choice.
Yet, I argue that such references to the minim, especially in instances where precision is not a concern (such as an approximate dosage range), do not signal a sea-change in which unit was used in practice, but actually quite the opposite. Throughout the nineteenth century, minims were becoming tools of rhetoric: a dialect for speaking of “a small fluid unit” in a way which carried an air of rigor and precision, and a way to communicate that a larger amount (such as 30 minims, which could have been listed as a “half fluidrachm”) had been carefully determined. As Ted Porter noted in his seminal Trust in Numbers, late nineteenth century drug manufacturers were aware that their medicaments were prone to vary in concentration, due to difficulties standardizing both the organic ingredients themselves and the animal models and conditions by which manufacturers hoped to ensure a uniform therapeutic effect (Porter 1995, p. 29). This instance of a precise unit used in a case where precision is not even warranted is an interesting exemplar of Porter’s broader points about quantification and standardization as guarantors of efficacy between disparate parties. In that light, units chosen for advertising therapeutic drug dosages may be seen as part of a rhetorical strategy to establish trust between distributors/consumers and manufacturers, despite the inherent non-uniformity of the drugs themselves. This view of the minim as being gradually reduced to a rhetorical tool during the nineteenth century, rather than replacing the drop, is bolstered by clear evidence that minims and drops continued to coexist throughout the nineteenth century. Indeed, as we will soon see, the medical articles within the BMJ itself seem to run against what we would conclude from looking only at its back matter.
The first indication that not all controversy had been resolved for the smallest of liquid units is the seminal Weights and Measures Act of 1824. This act established the system of Imperial units which is still used for some purposes in England today (and which is closely related to the US customary system, derived from the same English units). The scythe of standardization was sharp in the 1824 Act, paring down the confusing cornucopia of units which had remained in circulation. Nearly all previous laws regarding weights and measures, even those concerning ancient measures, were repealed by this act. However, drops are never mentioned in this statute, and, perhaps more importantly, neither are minims. No fluid unit smaller than a fluidrachm warrants mention in this legislation, possibly suggesting that consensus had yet to be reached regarding issues with the drop and with the minimometer (Great Britain 1824, pp. 637–51).
Additionally, we do not find, in the medical and practical literature, that the minim is the obvious winner over the drop. In Arnold James Cooley’s sizeable A Cyclopaedia of Six Thousand Practical Receipts: And Collateral Information in the Arts, Manufactures, and Trades, Including Medicine, Pharmacy, and Domestic Economy, the key below appears in the front matter of the book (1845) [Fig. 6].
Searching the text, I found 49 instances of “drop” used specifically as a unit (not including uses of “drop” as a verb, or as an indeterminate amount). By contrast, I could not find a single instance of the use of “minim” as a unit (though it is used once to describe an indeterminate minimal amount). Any reader who did not specifically seek out the definition of the drop in the front matter, as seen above, would have every reason to assume that instances of the word “drop” throughout the text meant exactly that: physical drops of liquid. Given that only drops appear in the text, it is difficult to believe that the medical recipes in this text were actually constructed using a minimometer, nor does it seem reasonable that the author truly expected readers to use a minimometer when following the recipes. It seems that the front matter’s suggested substitution of the unit “minim” wherever instances of “drop” appear is simply lip-service to a sense of rigor and propriety. It is worth noting that, throughout the six editions of Cooley’s Cyclopaedia, the last of which was published in 1880, this nomenclature convention remained unchanged.9
In medical literature as well, we find that drops endure. In an 1871 BMJ article, “Reports of Medical and Surgical Practice in the Hospitals of Great Britain,” surgeon Thomas R. Jessop and physician William Stokes include a dosage of 1–2 drops, as well as three references to dosages in minims (pp. 36–7). Considering that both units are used in this same article, drops clearly continue to play a role in both practical use and discourse, and not from mere inertia. More than a decade later, in another BMJ article, “Reports to the Scientific Grants Committee of the British Medical Association,” drops are referenced in a report on inoculating rodents with putrid blood (Dowdeswell et al. 1884, pp. 101–18). This is likely due to the obvious difficulties of gathering and measuring blood in a minimometer, and the relative convenience and appropriateness of drops for the purpose of inoculation. Lastly, in “A Report on the Physiological Action of Chloroform,” an 1893 article in the BMJ, the authors tested the effects of chloroform doses injected in exact numbers of minims, but also included two observations in terms of drops: “…the respirations are increased in depth and frequency. Upon the application of a second drop no movements of the animal take place as a rule, but the respiration is temporarily quickened and increased” (Gaskell and Shore 1893, p. 111).
The use of drops also proliferated in the United States, as evidenced by a review by the American mathematician Charles F. Himes, which summarized six American pharmaceutical articles (as well as an English and a German one) on the variance of drop volume, ranging from 1830 to 1880 (Himes 1883a [July], pp. 57–60; Himes 1883b [August] (reprint), pp. 394–97). The earliest of these articles experimented with the volume of drops of different fluids under various conditions, with the goal of denouncing the use of drops in pharmacy (particularly when treated as being synonymous with the minim) (Durand 1830, pp. 165–71). The more recent articles had the goal of simply informing the reader about the variations in drop volume and the need to regulate certain conditions in the use of drops. It appears as if, over time, it was treated as an inevitability (or a necessity) that drops would in fact be used despite their flaws. In 1895, an article in the Bulletin of Pharmacy offered calculations and recommendations on improved vessels for the formation of consistent drops of the proper size, as well as tables listing the size of drops as dependent on the diameter of the pipette and on the type of liquid being dropped (Eschbaum 1895, pp. 461–62).
To fully uncover why drops continued to coexist alongside the minim, we should consider not only the failure of the minim and minimometer to overcome physical and epistemic limitations, but also the ways in which drops succeeded. Some early clues to the practical and epistemic virtues of drops can be seen even within the very same document which first denounced their use and introduced the minim.
Recalling the 1809 Pharmacopoeia’s scathing indictment of “drops” as an inconsistent and untenable fluid unit, one to be stricken not only from formal use in devices, conversion tables, and medicine recipes, but even expunged from all future metrological vocabulary, a reader may be surprised to find that drops are used as a unit several times in the Pharmacopoeia itself. One instance concerns dosage, noting that 20 drops of tincture of opium are just as effective as 25 drops (RCPL 1809, p. 250). It should be noted that the Pharmacopoeia, though a sizeable reference document, was thoroughly compiled and edited over the course of several years with input from members of the Royal College of Physicians of London, the Royal College of Surgeons, and the Society of Apothecaries, as well as others with related expertise (pp. iv–vi).10 The inclusion of drops as a unit of dosage was unlikely to be an editorial oversight.
A similar instance can be found in The Pharmacopoeia of the United States of America, which primarily used minims, but also directs the reader to use “fifteen drops” of alcohol in its recipe for mercurial liniment, in addition to two other uses of “drop” referring to an approximate process of adding a small amount (United States Pharmacopoeial Convention et al. 1830; pp. 81, 112, 150). Similarly, drops are also referenced in the English 1809 Pharmacopoeia in instructions for the composition of medicines: “admixture of a few drops of the essential oil with water and shaking them together…” (RCPL 1809, p. 167). Most other instances of the use of drops in the Pharmacopoeia involve titration, appearing in instructions for gradually adding a liquid, sometimes a drop at a time, until a chemical reaction occurs: “The division of the camphor by trituration, with a few drops of spirit, facilitates the solution…” and elsewhere, “…drop in as much muriatic acid as may be necessary to precipitate the sulphur…” (RCPL 1809; pp. 219, 97).11
Notice that these last examples utilize the “drop” not only to describe the unit of input, but also to communicate the process of dispensing. This emphasizes the role that units play beyond that of mere abstract entities; they must be realized by a physical method which takes place in space and time. In a process like titration, where an observable result will indicate when a certain threshold has been reached, simply being able to add as little as possible as conveniently as possible is far more important than knowing that each additional unit being added is uniform. For such measurement problems, the output (which results from adding the amount), rather than the exact input, may be more important in helping a practitioner determine the correct amount for the problem at hand, or rather, in determining the amount as correct for the problem at hand.
Additionally, an approximative unit such as the drop, which black-boxes the input (such as the exact amount of liquid added), can help draw more attention to the output (in this case, a chemical reaction). Drops provided a way to communicate an epistemically advantageous heuristic, allowing a practitioner to attend to the results of their process and quickly observe when they have arrived at the correct amount, by spending less time and attention on ensuring a precise unit of input.
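Though anachronistic to these nineteenth-century sources, the heuristic can be stated as a simple feedback procedure. The minimal Python sketch below is mine and purely illustrative (the function and class names are invented, and the drop-volume range is borrowed from my dropper measurements in Section 2); what it shows is that the stopping condition is defined entirely over the observed output, so the volume of any individual drop never needs to be known.

```python
import random


def titrate(add_one_drop, reaction_observed, max_drops=1000):
    """Add liquid one drop at a time until an observable change occurs.

    The drop "black-boxes" the input: we never measure how large any
    individual drop is. All epistemic weight rests on the observed
    output (a color change, a precipitate, a sleeping patient).
    """
    drops_added = 0
    while not reaction_observed():
        if drops_added >= max_drops:
            raise RuntimeError("No reaction observed; check the procedure.")
        add_one_drop()    # approximate, non-uniform input
        drops_added += 1  # observe after each addition, not after a batch
    return drops_added


class Beaker:
    """A vessel receiving non-uniform drops (range taken from Section 2)."""

    def __init__(self):
        self.volume_ml = 0.0

    def add_drop(self):
        self.volume_ml += random.uniform(0.027, 0.043)


beaker = Beaker()
# The threshold stands in for the observable reaction; in reality this
# number is unknown to the practitioner, who simply watches for the change.
threshold_ml = 0.5
n = titrate(beaker.add_drop, lambda: beaker.volume_ml >= threshold_ml)
print(f"Reaction after {n} drops ({beaker.volume_ml:.3f} mL)")
```

A minim-based procedure would invert this structure: fix the input exactly in advance, and leave the output unconsulted.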
Knowledge of exact input would be more important for a process which one expects to repeat with the same number of units in the future. Yet, this is only possible once standardized materials with reliably uniform concentrations have been achieved. Such was not the case for medicament recipes which used organically derived ingredients; as previously noted, the therapeutic effects of such drugs were extremely difficult to standardize throughout the nineteenth century, due to variance both in the materials themselves and in the animal models which were used to determine the concentration of a given batch (Porter 1995, pp. 29–30).
The second half of this paper will discuss a similarly peculiar measuring problem, wherein a particular threshold needed to be precisely met and not exceeded. The development of the safe administration of general anesthesia aptly illustrates the practical and epistemic obstacles a unit faces in fulfilling its two roles. One role that units play is as an idealization which must be realizable by a physical apparatus; the difficulties of this realization have occupied the bulk of this paper, and they will also have particular procedural consequences in the context of general anesthesia methodology. An additional role that units play is as tools of communication, conveying knowledge claims to different audiences with particular purposes in mind. In the following section, I describe two audiences which a practitioner or manufacturer may have in mind in the composition, recommendation, administration, or selling of medication. The sections after it will show how thinking of units as a form of communication can shed light on how the institutionally-authorized minim did not manage to erase the use of drops in the nineteenth century, in practice or in literature.
5. Units as Tools of Communication: The Discursive Audience and the Practical Audience
When developing a knowledge claim, such as in an experiment to determine dosage, there are two main audiences a practitioner can have in mind. The formation of the knowledge claim will be influenced by which audience is most relevant, because the “language” of devices, units, and procedures that is chosen must be able to communicate with that audience. I will call these two audiences the “discursive audience” and the “practical audience.”
The discursive audience is the audience of the written and spoken word. This is the audience which one might have in mind when writing up results for publication, for review by peers. This is also the audience which practitioners face if they should cause accidental death and must bring forth an oral or written record of their actions. Considerations which may come into play regarding this audience include one’s results being taken seriously, one’s reputation as an expert, and one’s legal safety. Simplicity, rigor, and precision are epistemic values which would influence the choice of instruments, units, and procedures, with a discursive audience in mind. In short, this is the audience who wants to know, “What happened?” Communication of results to this audience operates as an accounting mechanism; the record of events is sufficient, whereas the procedure may be more or less irrelevant.
The practical audience is the audience of practice. This is the hands-on audience one would have in mind when communicating how to administer (or replicate) the results given by the knowledge claim. The practitioner herself is part of this audience, and thus the desire to communicate to oneself likewise influences the devices, units, and procedures which are chosen in the formation of knowledge claims intended for the practical audience. The kinds of concern which come into play for the practical audience would be accessibility of all materials and measures, careful communication of important qualitative features, and clear instruction on the order of operations. This audience wants to know, “How can I make it happen?” Communication to this audience is focused primarily on procedure. The goal here would be to make all tacit knowledge as un-tacit as possible. And, most of all, to draw attention to what requires attention; here, precision is only as good as it is useful.
These audiences may overlap. For example, another practitioner may read a publication, receive discursive communication that convinces her that the research is serious and worth undertaking, and thus be inspired to try the experiment herself in practice. With this overlap in mind, an author may wish to convey “what happened” in such a way as to also provide instructions as to “how to make it happen.” My intention is not to neatly divide these audiences, but rather to point out that there are two possible levels of communication at work, and both influence the practitioner’s choice of tools and descriptions used in the formation and communication of a knowledge claim (and, as previously noted, a unit operates both as a tool and as a description). When both audiences are in mind, there may be moments where the import of one audience supersedes the other.12
In particular, the development of safe general anesthesia methods brings the implications of this difference in audiences into greater relief, showing how the drop communicated to a practicing audience, at a time when medicine was taking a major turn towards patient-centric, rather than practitioner-centric, medical practices based on evidence from patient outcomes.
6. The Problem of General Anesthesia
The problems faced when inducing general anesthesia prior to surgery in the late nineteenth century offer an interesting comparison of the benefits and limitations of using minims for dosage versus drops. While chloroform had been in commercial use as an anesthetic since the 1850s, it was notoriously dangerous, and the proper dosage was extremely sensitive to the particular patient, leading to many deaths owed to the anesthetic rather than to the actual surgery. Even two decades after its introduction into medical practice, the chloroform problem remained a confounding one. Precision of the amount administered, the British Medical Journal observed in 1869, could not ensure the safety of the patient from the dangers of chloroform overdose:
The use of the minim measure as a means of security must also be put quite aside. Those who have been accustomed to pour upon a handkerchief as much chloroform as seemed necessary, and then hold it closer or less close, according to judgment, must often have experienced melancholy amusement in noting the piteous way in which, at an inquest, the exhibitor will inform the coroner that he had “given only forty-five minims by measurement” (“Chloroform Accidents” 1869, p. 590) [emphasis mine].
Such precision sounds very fairly on such an occasion, but we fear that it conduces nothing to the patient’s safety. This and some other fanciful precautions always look to us as if the exhibitor were making preparations for the witness box, and remind us of the ready but impolite reply once made to a cross-examining counsel, who, asking the reason of his witness’s minute accuracy as to distances, was told, “I thought some fool might ask me, and so I measured them.” We are even in doubt whether the tenor of this remark does not also apply to the custom of holding the patient’s pulse (“Chloroform Accidents” 1869, p. 590) [emphasis mine].
For practitioners who saw the method as likely to threaten the lives of their patients (and perhaps their reputations by extension), chemically-induced general anesthesia was too unwieldy. Additionally, and contrary to the sensibilities of seasoned physicians, the risk did not dissipate even with significant medical experience.14 One English dental surgeon, referred to as “Mr. Walker” (most likely Joseph Walker of Westminster Hospital), had already successfully performed 60–70 procedures with the use of chemical anesthesia when he experienced his first close call:
The method Mr. Vidler recommended, was to drop fifteen minims [of chloroform] on the sponge, and then place the inhaler on the nose…After adjusting the inhaler to the nose for a few seconds, the pulse at the wrist suddenly ceased…not until a full hour had elapsed was [Mr. Walker] at all sure his patient would survive. [Mr. Walker] had not used the nose-inhalers since, nor could he advise the use of them on account of their uncertain action (Parkinson 1868, p. 131).
Because of the known risks of using chemical anesthetics, even after chloroform was widely available, the regular practice in most hospitals remained what it had been: the surgeon himself choked and smothered the patient until she lost consciousness. This too, unsurprisingly, was a cause of morbidity and mortality in patients, owing not only to the physical violence of the act but also to the emotional distress which patients would undergo during the process (Koch 1999, p. 34). Rather than being able to fully embrace the new technology of chemical anesthesia, surgeons found themselves simply choosing between two dangerous methods which threatened to undermine the operation before it even began.
7. The Open Drop Method and the Nurse Anesthetists of St. Mary’s Hospital
An alternative method for administering chemical anesthetics, the “open drop method,” was popularized in Germany in the late 1870s (Goode 2015, p. 53). A wire mask was placed around the patient’s mouth and nose, covered in layers of cloth or gauze, upon which the surgeon administered ether or chloroform slowly, a drop at a time (p. 52). One of the original devices for this procedure, the Esmarch mask (named for its maker, the German surgeon Dr. Johann Friedrich von Esmarch), was designed for convenient use on the battlefield (Gordon and James 2019, p. 25). The system was simple and compact, befitting the context of its use, wherein portability and timely application far outweighed the need for precise measurement. This was a device born of practical necessity. The gauze and bottle for anesthetic fit snugly inside the wire mask, which then fit inside a slim leather carrying case with a shoulder strap (Wood Library Museum n.d.). Despite being originally intended for expedient field use, the open drop method soon showed advantages for regular medical practice. It was fully embraced by St. Mary’s Hospital (later known as the Mayo Clinic) in Rochester, Minnesota, after a visiting German anesthetizer explained the method to the surgeons and nurse anesthetists in 1885 (Goode 2015, p. 52).
Because of the influence of the Mayo brothers, William Mayo and Charles Mayo (along with their father, William Worrall Mayo), who were prominent and internationally influential surgeons, St. Mary’s was a particularly fertile teaching hospital to receive the new method from abroad. Firstly, the Mayo brothers emphasized the role of the hospital as a place for practical education and dissemination of knowledge, as well as a forum for welcoming new ideas and methods from visiting physicians and surgeons. Secondly, and perhaps more importantly, the Mayo brothers believed that anesthesia was itself a procedure requiring more care, attention, and hands-on expertise than the surgeon could provide before (and while) performing the surgery. The Mayo brothers created positions for dedicated nurse anesthetists, a job which had been widely considered far too important and precarious for anyone but surgeons (as highly educated men) to handle. At St. Mary’s, the nurse anesthetists (typically women) were expertly trained and trusted to be responsible for inducing and maintaining anesthesia throughout the surgical procedure. The Mayos’ appointment of nurses to this critical task was not only groundbreaking; it was seen as unorthodox and disturbing by the medical community, and it later inspired lawsuits disputing the ability of St. Mary’s nurses to successfully fulfill the responsibility of administering general anesthesia (Goode 2015, p. 52).
One such St. Mary’s nurse anesthetist, Alice Magaw (later, Kessel), learned the German “open drop method” and transformed it into a flawless technique in her hands. Over the course of seven years, she published five papers on the 14,000 patients she had successfully anesthetized without a single anesthesia-related death, at a time when women were neither expected nor allowed to join medical societies or publish their work without express invitation and permission (Goode 2015, p. 52; Magaw 1899; Magaw 1900; Magaw 1901; Magaw 1904; Magaw 1906). Her articles were used, successfully, in the legal defense of nurse anesthetists as medical practitioners, proving the capability of St. Mary’s nurses and demonstrating the increased patient safety owed to their dedicated expertise. Magaw quickly became the only practitioner at St. Mary’s Hospital as renowned as the Mayo brothers themselves, who christened her the “Mother of Anesthesia” for her many achievements (Goode 2015, p. 54).
It should be noted that, although Magaw’s development of the open drop technique was internationally acclaimed and referenced (see Nelson and Wilstead 2009, p. 14), Magaw was far from the first person to see the potential value in continuous administration.15 More than a decade before the Esmarch mask was designed, continuous methods of one form or another had received attention off and on in England, as early as the 1860s (although such a method was then regarded as a “crude, wasteful, and inexact method of administration”) (Matioc 2017, p. 396). Magaw’s addition was to place more emphasis on communicating with the patient during the process, as well as on giving due attention to the patient’s mental and physical state (as will be explained in detail below).
My focus on Magaw’s development of the open drop technique is due not only to her role in perfecting the procedure for widespread and reliable use, but also to her attention to communicating to the audience of practice. We have vivid descriptions of her method, as a practitioner at an internationally renowned teaching hospital which emphasized robust notetaking as well as regular demonstrations and publications. Her attention to carefully communicating her physical and mental process was perhaps also due to her rather controversial and unconventional roles, as both anesthetist and instructor of nurse anesthetists. Magaw’s very position was considered by some to be unethical and irresponsible. To quote one surgeon professor’s instructional manual, “The giving of an anesthetic should never be delegated to a nurse…Nurses, owing to their incomplete medical education, of necessity, are irresponsible anesthetists” (Heineck 1901, p. 9). Much was at stake in overcoming these prejudices.
Magaw’s thorough descriptions reveal how the drop played an essential role in the solution to the problem of general anesthesia. While Magaw’s techniques have been noted as embodying the evidence-based turn of the twentieth century and the return of bedside and patient-oriented medicine (Goode 2015, pp. 54–5), the unit which made her technique possible has been overlooked. We will soon see just why the aptly named “open drop method” necessitates the drop, and how the continual and gradual administration of anesthesia enabled Magaw to guide a patient along the edge of a dangerous physiological threshold.
8. Magaw’s Approach to Precision Medicine: “Carrying the patient along the edge of the precipice”
As previously noted, the common practice was for surgeons to choke the patient prior to surgery, or make an educated guess at the proper amount of anesthetic to administer that could keep the patient unconscious for the duration of the procedure. In Magaw’s description of her procedure, importance is not placed on the exact amount of ether or chloroform which is given to the patient for inhalation, but rather attention is placed on the patient herself and her physical and mental state. In an address to the Minnesota State Medical Association, she explains her process:
During the administration of either ether or chloroform prepare the patient by explaining just how you expect the anesthetic to affect him, and go through each stage, giving him your undivided attention. In fact, talk him to sleep, with the addition of as little anesthetic as possible (Goode 2015, p. 54; Magaw 1904).
In administering ether, we commence with the drop method as carefully and with as much air, as though it were chloroform, until the patient’s face is flushed, when we have a large piece of surgeon’s gauze of several thickness convenient and keep adding a few more layers of the gauze and giving ether a trifle faster until the patient is asleep, then remove the gauze and continue with the same covering as at the start and by the drop method. Should it produce difficult breathing, profuse secretions of mucus, or cough, or should the muscles be slow to relax, change to chloroform (Goode 2015, p. 52; Magaw 1904) [emphasis mine].
This process stands in contrast to previous attempts to solve the problem of general anesthesia overdose by use of a breathing device. For example, J. T. Clover’s bag inhaler, designed in 1862 (with a more portable version following in 1877), was intended to administer an exact number of minims, to ensure that the air would maintain a precise, constant concentration of the chemical. The inhaler consisted of a hard rubber mask which enclosed the patient’s face so as not to let fresh air in, and a large bag that the patient would breathe into. The bag was inflated by a bellows and was injected with a certain number of minims to produce a given desired concentration of anesthetic in the air (Sykes 1960, p. 8). For example, 25 minims of chloroform was said to produce 28 cubic inches of chloroform gas in the bag (which, at least nominally, was to contain 1000 cubic inches of air), thus a fixed concentration of 2.8 percent. Clover’s inhaler was “designed to make an overdose of chloroform absolutely impossible,” but by 1873 this was shown to be a failed promise (p. 1). While blame for accidental death was attributed to overestimating the size of the bag (and thus underestimating the concentration of chemical inside it), to incomplete inflation of the bag, or simply to the inconvenience and hassle of operating the bulky device, it may just as easily have been that Clover’s inhaler was subject to the same problems for which other inhalers, such as F. E. Junker’s, were criticized several decades later.
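The nominal arithmetic behind this figure, and its fragility, are worth spelling out (the 800-cubic-inch bag below is my own illustrative assumption, not a figure from the sources):

$$ \frac{28\ \text{in}^3\ \text{chloroform vapor}}{1000\ \text{in}^3\ \text{air}} = 2.8\%; \qquad \text{but if the bag in fact held only } 800\ \text{in}^3, \quad \frac{28\ \text{in}^3}{800\ \text{in}^3} = 3.5\%. $$

The device’s guarantee thus rested entirely on an input-side quantity, the bag’s true volume, which the operator could not verify at the bedside.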
One 1898 model of Junker’s inhaler (which underwent several modifications by other surgeons), a complicated series of pipes and valves featuring two graduated measuring vials, was purported to be precise down to 1/100th of a minim. Yet, for all its precision, one critical article in the Lancet describes a death resulting from its use, noting that the rubber face mask obscured the patient’s cyanosis (a bluing of the skin, indicating low oxygen levels) such that it was not noticed in time. As previously noted, the rubber mask which formed a seal on the patient’s face was necessary to ensure that the air breathed in was precisely the intended concentration in the bag. The author argues that “to give chloroform or any other drug with safety one wants to know not the amount circulating in the blood but the effect which it is having on the tissues,” and denounces the device as “at its best…little more than an economizer of chloroform” (Lane 1900, p. 23). Another, less critical article agreed that clever machinery did not replace a user with skill and sense, but noted also that the machine was to be praised for its precision (Edmunds 1900, pp. 227–28). Even in the face of evidence that increasingly precise devices did not prevent death, precision of input was still regarded by some as both the problem and the solution.
Unlike Magaw’s method, which allowed her to switch from large drops to small drops after noticing the patient was asleep, these devices prided themselves on keeping an exact, constant input. Additionally, these inhaler devices obscured the face of the patient, and were either bulky or complicated to operate (due to the extra attachments needed for obtaining precision of input), taking attention away from the patient. Without attention on the patient, it was easier to miss vital information that indicated, to a careful practitioner, that an increase or decrease in the rate of administration was needed accordingly. This quick responsiveness to feedback was particularly salient in this context; this was a measuring process for which the correct amount, though unknown, was known to be very sensitive to each particular patient. While it was acknowledged that the patient’s sex, body composition, pre-existing conditions, and even mental state at the moment of operation could impact the necessary dosage (Heineck 1901, p. 78), knowing exactly how to adjust dosage accordingly was difficult to ascertain beforehand.
Magaw’s method was patient-specific and qualitative, but clearly not less epistemically valuable for its lack of precise quantification of input. Magaw is quoted as saying, “in giving an anesthetic remember that you are, as it were, carrying the patient along the edge of the precipice, and while there is no need of going over you must watch not to get too close to the edge” (Goode 2015, p. 52). Because each patient had a unique physiological “edge,” what was essential to the success of Magaw’s method was that it communicated to the audience of practice. She gave instructions for the hands-on and eyes-up process of administering not too much, not too little, but just enough at just the right pace to induce and maintain the ideal physiological state for keeping the patient calm and still.
Here the unit of drops sacrifices some knowledge, such as the ability to report an exact amount given to the patient, for the sake of forming other, more important knowledge claims, particular to the patient and most relevant in the moment of administration. Uniform units, such as minims, convey a level of precision and universality, and encourage trust in previously proven trials. Drops, on the other hand, communicate a certain level of approximation and individuality, and even a mistrust of applying previous trials to the current one. More importantly, drops communicated a heuristic process: One at a time. Observe. Use your judgement. Continue until observing change. It is hard to miss the obvious analogy between Magaw’s process and the technique of titration. Much like determining a chemical concentration by adding small amounts of liquid, by slow stream or by drop, until a chemical reaction occurs, hers was a process of individually titrating each patient to the ideal mental and physiological state. This required the use of a unit which could be administered approximately and continuously.
To be fully convinced of this claim, we should consider whether drops are truly necessary for accomplishing this task, or whether minims could also be gradually administered with Magaw’s method. Discretely measuring exactly one minim at a time and depositing it, as has been noted, takes time and careful attention, requiring the instrument to be cleaned between uses (not to mention wasted fluid) to achieve anything close to precision. As explained, attention must be focused on the patient during this process, to prevent overdose. Alternatively, measuring a certain number of minims in a vial beforehand (enough for the duration of the procedure) still leaves open the question of what unit of actual administration will be used. Pouring out approximately one minim at a time from the vial is difficult to achieve with precision (undermining the very motivation for using minims) and, again, takes more attention away from the patient.
In addition, we must keep in mind that we are not only concerned with the absolute amount, but also with the rate of administration. We can think of the relevant unit, within this context, as not simply volume, but volume over time. For time-sensitive purposes such as the administration of chemical anesthesia, if a measuring process is time-consuming, the process of measuring the unit changes the very unit itself. The seeming precision and uniformity of minims, and any epistemic virtue we may perceive to gain from knowing precisely how much was administered, would be muddied by irregular intervals of time between each unit added. If we do not know that the units are applied relatively continuously at approximately equal intervals of time, as drops can be, we are less able to interpret the qualitative effects observed, and less able to make safe decisions about increasing or decreasing the rate of application. With less precise decision-making, we are more likely to increase the rate when it should not be increased, or fail to decrease the rate when it should be decreased, and unintentionally cross a dangerous threshold.
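Put schematically (this formalization is mine, not one found in the period sources): if each administered unit has volume v and successive units are separated by a time interval \tau, the quantity the practitioner actually controls is the dose rate

    R \approx \frac{v}{\tau}.

Drops keep v small and \tau short and roughly constant, so R is both approximately continuous and easy to adjust; discretely measured minims fix v precisely but leave \tau long, irregular, and unrecorded, so that R, the quantity that matters, is neither known nor controlled.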
Additionally, we cannot fail to notice that drops were advantageous not only in real time, but also for communicating this process in published material. After all, Magaw could have reported the number of minims used for each patient by simply measuring the amount of chemical in the can before and after the procedure; that she instead describes the process in “small” and “large” drops is telling. As thoroughly noted, the total number of minims was irrelevant for instructing practitioners on how to successfully anesthetize patients, and potentially misleading for those who would think themselves secure in simply using the same number of minims and expecting the same outcome.
Standardization, rather than an end, is a means for achieving certain goals. And often, it comes at a cost. To quote Ken Alder, “The price of standards is eternal vigilance” (Alder 1998, p. 528). I would add that the cost is often a redirection of vigilance, from the immediate practical process at hand to abstract, systemic concerns. Sometimes the cost is worth the benefits. Other times, it is better to have nonstandard measuring practices which can accommodate the demands particular to the measuring tasks at hand. This was the case with the open drop method for inducing anesthesia: while a standard count of minims was better for recording the total amount of the chemical administered, and thereby for protecting a surgeon’s prestige and reputation, that benefit was not worth the risk to the patient’s life.
Of the two audiences a practitioner could wish to communicate with, one is present at the time of practice, as many practitioners in the education-focused environment of St. Mary’s hospital would have been. This is the epistemic audience to which the use of drops communicated at the turn of the twentieth century, in some cases better and more effectively than the more precisely defined and institutionally endorsed minim. The use of continuous drops has only expanded since their introduction to anesthetic administration methodology; though today every IV bag a nurse attaches at his patient’s bedside may have the word “mL” printed on the plastic, the gradual administration is in a unit no nurse gives much consideration, nor knows the exact size of, as he checks the patient’s vitals, makes new notes on the appearance of the patient’s face and breathing, and talks his patient through the next step in the procedure.16
9. The Gradual Fall of the Minim
Alice Magaw’s use of the open drop method offers resolution to one branch of the story of how the drop emerged victorious: in continuous administration. We now turn our attention back to the development of fluid dosages and ask how and why the standardized minim disappeared from medical practice and terminology, while the drop survived even the widespread metrication that took place in England’s medical and scientific communities.
Indeed, though the minim achieved prestige by the end of the nineteenth century, enough to carry it through the early twentieth century as still the unit of choice in advertised dosages in the British Medical Journal, its standardized superiority was mostly nominal (“Back Matter” 1917, p. 12). Not only were drops still used in practice for both practical and medical purposes, as already evidenced in previous sections, but as the nineteenth century came to a close, it appeared as if the potential improvements to drops were more realistic than improvements to graduated tubes which could measure minims. In an 1880 article in the BMJ, “The Metric System in Medicine and Pharmacy,” the author weighs the pros and cons of adopting metric units, paying special attention to the problems faced by small fluid units:
Thus it is often not easy to measure quantities of fluid by the graduations on glass. The quantity is apt to rise above or sink below the graduated line. Then adhesion may come into play, and, by a sort of capillary attraction, may raise the fluid above the line ; and, lastly, volatility may deprive us of an appreciable quantity of our medicament…In the case of powerful solutions, the difficulty is often solved by starting with a certain number of drops, and increasing the dose gradually till the desired effect is obtained [emphasis mine] (“The Metric System in Medicine and Pharmacy” 1880, p. 858).
Beyond the apparent impracticality of the author’s ultimate answer to the question of the metric system in medicine, what is more noteworthy is that the minim has ceased to be viewed as a viable solution to the problems faced by small fluid units. Regardless, past the turn of the twentieth century, we find that the symbiotic relationship between improvements to the minimometer and the development of the dropper continues, as demonstrated by the article “Medicine Dropper to Deliver One Minim (Apothecary’s Measure) Per Drop,” published in the Journal of the American Pharmaceutical Association (Konnerth et al. 1935). Finally, the minimometer and the dropper become one and the same device, ending a century of struggle between the two units. Yet, the article title is a bit misleading: while a dropper can be calibrated to deliver an approximately consistent fluid volume per drop of water, the same dropper used with a non-aqueous liquid will deliver a different fluid volume per drop. Recall Tate’s Law, which shows that even under controlled conditions, drop size will be directly proportional to a given liquid’s surface tension. Different liquids with different surface tensions, such as water and alcohol, will produce different-sized drops, even from the same dropper. While drops of alcohol will be approximately similar in volume to drops of other alcohol-based solutions, they will not be the same volume as drops of an aqueous solution.
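For reference, Tate’s Law in its idealized form (setting aside the modern correction factors discussed in note 8) can be written as

    W = 2\pi r \gamma, \qquad \text{and hence} \qquad V = \frac{2\pi r \gamma}{\rho g},

where W is the weight of the detached drop, r the radius of the dropper tip, \gamma the liquid’s surface tension, \rho its density, and g the gravitational acceleration. Holding the dropper fixed, drop volume scales with \gamma/\rho: using modern room-temperature values (roughly 72 mN/m for water against roughly 22 mN/m for ethanol), a water drop from a given tip has more than twice the volume of an ethanol drop from the same tip.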
Other units of fluid volume, such as gallons or liters, have been calibrated to correspond to a certain weight of water or, more recently, to a cubic volume.17 Once calibrated, these units measure out the same volume regardless of which fluid is being measured. In contrast, drops produced by the same dropper under controlled conditions are systematically different for liquids with different surface tensions. As noted, a dropper designed to deliver a minim of an aqueous solution does not deliver a minim of other liquids. In that sense, the drop escapes uniform standardization, by virtue of the physical laws through which the unit is produced. Yet, developing a consistent dropper, even if only for drops of the same liquid, was an important achievement, and one that was only possible through continued attempts to realize the promised precision of the minim. Once dropper technology advanced enough to reliably deposit a more controlled, approximately similar volume of fluid per drop of a given liquid, there was no need to keep the minim, neither as a measuring unit nor as a meaningful word. The minim gradually disappears from both practice and literature until its abolition under the UK’s 1985 Weights and Measures Act, along with the fluid scruple and fluidrachm, also obsolete by this time (“Weights and Measures Act 1985,” p. 62).18
10. Conclusion
By viewing measuring units as needing to fulfill the demands of two roles, (1) that of an idealization to be realized by a physical device and (2) that of a communication tool, we can better understand how the benefits of the drop outweighed its shortcomings. Within certain contexts (such as administering or composing small fluid dosages), and for certain epistemic goals (such as verifying the amount administered as correct), the standardized minim was never able to achieve success as the drop’s intended replacement.
The first part of this paper focused on the struggles of the minim and the minimometer to meet the demands of the first role, as a unit which can be reliably produced by a device. As philosopher Paul Teller points out in his essay on measurement accuracy, there are constraints on units as idealized standards. They must “be realized in some concrete piece of apparatus that will in practice function as the standard, and doing so involves de-idealization from the theoretical definition” (Teller 2018, p. 279). In order to be a successful standard unit, an idealized definition (such as the minim as 1/60th of a fluidrachm) has to be able to operate adequately as a standard in practice, not only on paper.
The constraints on achieving this realization can be physical and practical, oriented within time and space. The physical limitations of the minimometer prevented universal use, and its practical limitations wasted fluid and time. The tube required submersion in a container tall enough and full enough to reach the desired graduated markings, which not all uses could accommodate. Additionally, each use required that the outside of the tube be wiped clean, an inconvenient step needed to avoid unintentionally adding liquid to the amount measured.
Alternatively, there can also be epistemic constraints on the realization of standard units, wherein the knowledge we wish to gain from their use is inhibited in the process of using them. Despite there being a way to measure minims, the unit suffered from epistemic issues in its actual implementation. Because of the residual loss of liquid from surface adhesion to the glass inside the tube, which varied across fluids, practitioners could never be sure that the amount measured was actually the amount deposited (nor be sure how much the amounts differed). The physical realization of a standard unit does not just consist of a physical apparatus, but a method for reliably producing the unit within its intended contexts. Because the minimometer was able to overcome neither the physical nor epistemic constraints on the realization of the standard minim, within the contexts it was designed to be used, it failed.19
The second part of this paper drew attention to the role that units play as tools of communication. To explain the rhetorical and heuristic superiority of the drop as compared to the minim, I described two audiences with which these units needed to communicate: the discursive audience and the practical audience. Whereas precision and simplicity were of importance in communicating with the discursive audience, other aspects more pertinent to the procedure (e.g., the hands-on measuring process of administering chloroform) were more important to the practical audience. Examining the failure of the minim to communicate adequately to the practical audience helped make sense of why the drop prevailed, despite its non-uniformity.
In the case of the open drop method for general anesthesia, knowledge of the absolute amount itself was not nearly as important as observing the qualitative effects the amount brought about. Drops allowed for attention to be redirected from precise knowledge of the “input” to precise knowledge of the “output” of the procedure. There was no guarantee of what would occur from a certain number of minims of chloroform, and thus the quantitative standard did not translate into reliable qualitative predictions. Drops, however, allowed for qualitative assessment to take place in real time, and thus were more precise, in the way that mattered: precision in the amount required to obtain certain qualities in the patient. When a successful (or unsuccessful) outcome is obvious, and the correct input is highly individual from case to case, it is more important to know the amount administered is precisely correct, than it is to know precisely the correct amount.20
Standard units can be advantageous for good record-keeping and for defense in the face of critique after the fact, but in the moment, a non-standard unit which prevents defense from ever being necessary may prove superior. The broader lessons offered to us by drops, the non-uniform unit that survived the century of standardization, are these: the kinds of knowledge which can convey practitioner prestige are not always as important as the kinds of knowledge which can save lives; exact knowledge which applies to aggregate populations may be less epistemically valuable than approximate knowledge which can be safely applied to each individual; and quantitative precision is only as good as it is useful.
Notes
1. For examples of studies on medications which are currently administered in drop dosages, see the following: Frenkel et al.’s 2008 study of a single drop dosage for glaucoma medication; Hansen et al.’s 2014 study of a two-drop dosage for infant oral vaccine for polio; Chau et al.’s 2015 study of a five-drop dosage for infant colic medication.
2. To clarify my use of the term “epistemic” in this paper, I point particularly to L. Mari’s paper on the epistemology of measurement in the sciences, which characterizes the epistemic role of measurement as “an evaluation whose results are informationally adequate to given goals” and measures as “results of operations recognized as adequate for their goal of obtaining information on measured things” (2003, pp. 18, 24).
3. RCPL 1809, pp. 4–5. Note that the term “fluidrachm” is used, replacing “fluid drachm”; in this issue of the Pharmacopoeia, fluid unit names were consciously revised to be distinctive from their weight counterparts. E.g., to alleviate confusion between a dry ounce (as a unit of weight) and a fluid ounce (as a unit of “measure,” what we would now call volume), the term “fluidounce” was coined. This careful attention to nominal clarity further evidences that this series of unit revisions was not only aimed at achieving better precision of units and devices, but also at achieving better precision of nomenclature and communication.
4. The “exchequer wine gallon” was standardized by an act of parliament to be 231 cubic inches, which weighed 58,176 Troy grains when filled with water at a temperature of 63 degrees Fahrenheit and a barometric pressure of 29.52 inches of mercury (RCPL 1809, p. 6).
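For a modern point of reference (my conversion, using today’s inch): 231 cubic inches comes to

    231 \text{ in}^3 \times 16.387 \text{ cm}^3/\text{in}^3 \approx 3785 \text{ cm}^3 \approx 3.79 \text{ L},

the same 231-cubic-inch definition the United States later retained for its customary gallon.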
5. In my own reading of Martindale’s article, which Crellin and Scott cite as possibly influential, I see no evidence that Martindale was concerned enough about issues regarding small fluid measures to be highly influential on the subject (see Martindale 1876). First, Martindale’s article is primarily a reply to a “Mr. Proctor” about the use of teaspoons and tablespoons as units/devices for dosage. His discussion of drops is tertiary. Second, his article does not even recognize that the minim was intended to be a measure of volume, not weight, signaling that he is not particularly well-read on the subject. Third, it is not clear that Martindale ever intended to replace the minim with a standardized drop (“goutt”), as Crellin and Scott imply; he simply notes that in his experience drops weigh differently from minims and should be distinguished from them. However, Martindale does offer a conversion factor (1 goutt = 0.05 cc = 0.85 minims), which mirrors the conventional definition for “drop” used in pharmacy today, as 1/20th mL. It is difficult to establish whether this convention (however far from empirical reality it may be) can be traced back to Martindale, although it is certainly possible.
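Incidentally, Martindale’s conversion factor checks out against the imperial minim of roughly 0.0592 mL (1/480 of the 28.41 mL imperial fluid ounce):

    \frac{0.05 \text{ cc}}{0.0592 \text{ cc per minim}} \approx 0.84 \approx 0.85 \text{ minims}.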
6. For example, if a certain ingredient was only sold in containers of a certain size, or if an organic substance (such as blood or venom) needed to be taken directly from an organism.
7. This intuition is mostly justified, despite some distinct differences between Philo’s tubes/vials and the droppers used today. For example, Philo suggested that the vials might have glass stoppers at both the top and bottom, to prevent evaporation of fluid, which the minimometer was guilty of allowing. Also, Philo expected that liquid would be poured into the vessel, rather than sucked upwards, as is the case with the modern dropper’s rubber bulb.
8. There is, however, a small portion of the pendant drop which remains attached to the dropper, which, if not accounted for, would lead us to slightly overestimate the surface tension coefficient. For this reason, in contemporary usage, correction factors have been added to Tate’s Law. For details, see Moradian and Mostaghimi 2011.
9. It should be noted that weights and measures were a special consideration for editorial oversight in Cooley’s Cyclopaedia, according to the publisher’s preface, due to the possible confusion caused by differences in the units used in medicinal and industrial/trade contexts (see p. vi).
10. To give an example of the careful editing of this volume, see the “Errata to be Amended” in the back matter of the subsequent 1815 publication of the 1809 Pharmacopoeia. Several of the errata relate to units of measurement; for example, “[On page] 247, for ‘Rose water, seven ounces and a half,’ read ‘fluid-ounces and a half’.” Yet, the aforementioned reference to drops of opium remains untouched in the 1815 reprint, located on page 279. Thus, the continued inclusion of drops as a unit was unlikely to be simply due to a lack of editorial attention.
11. Similar instances can also be found on pages 79, 95, and 102 of the 1809 Pharmacopoeia.
12. It has been suggested to me that these two audiences run parallel to the two epistemic categories introduced by Gilbert Ryle (1945), “knowing that” and “knowing how.” I will note that my emphasis is on audience, and on what tools are chosen for the transference of knowledge from one individual to another, regarding the perceived aims of the audience. My emphasis is not on assigning an epistemic status to the type of knowledge the practitioner has to begin with. While there may be fruitful connections to the “knowing that/how” literature, I believe nothing is added to the current discussion by the use of these terms here, and so any further connection is left to the reader to investigate.
13. Note that chloroform was significantly riskier for the patient than ether, although ether was more likely to cause irritation to the patient (see Matioc 2017, p. 396). Ether’s reduced risk also applied to the surgeon’s reputation; as Alice Magaw (whom I introduce later) notes, “Public opinion is so much in favor of ether at the present day, that if accidents in its use occur, the surgeon will not be blamed,” using this as one more reason to adopt her preferred method of using ether in all appropriate cases (see Magaw 1906).
14. For an excellent discussion of the “epistemology of individual experience” which British clinicians saw not only as their primary and most reliable source of knowledge but even as constitutive of their very character and professional role, see Lawrence (1985).
15. For an example of her work’s international influence, see German surgeon Hans Wildbolz’s 1913 work, which credits Magaw’s 1905 and 1906 papers (p. 421).
16. Note that, despite drops being different sizes across fluids, the unit of “drop” is still used to calibrate and classify IV drips. Today, IV microdrips are calibrated to administer approximately 60 drops per mL; IV macrodrips are calibrated to release approximately 10, 15, or 20 drops per mL. This is reminiscent of Magaw’s use of both a “small dropper” and a “larger dropper.”
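The modern bedside arithmetic (a contemporary textbook formula, not anything found in the period sources) makes this calibration explicit:

    \text{drops per minute} = \frac{\text{mL per hour} \times \text{drop factor (drops per mL)}}{60 \text{ min per hour}},

so an order of 120 mL per hour through a 20 drops/mL macrodrip set runs at 120 × 20 / 60 = 40 drops per minute, while the 60 drops/mL microdrip has the convenient property that drops per minute equals mL per hour.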
17. For example, in 1824 the British gallon corresponded to 10 Avoirdupois pounds of distilled water (Great Britain 1824, p. 639); today, the metric liter is equal to one cubic decimeter.
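By my conversion with modern values, 10 avoirdupois pounds of water comes to

    10 \text{ lb} \times 0.4536 \text{ kg/lb} \approx 4.54 \text{ kg} \approx 4.5 \text{ L},

in line with the imperial gallon’s current definition of 4.54609 litres.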
18. This act was later amended to allow many Imperial units (including the minim) to be used in trade as supplementary indications only, which is the current policy as of this writing. Interestingly, the United States has never formally abolished its customary units, but has merely issued noncompulsory recommendations, such as the Metric Conversion Act of 1975 (Ford 1975).
19. The minim does not stand alone in the history of science as the only failed attempt at standardization; it is likely that historical accounts of failed standards are rare due to lack of attention, rather than lack of existence. Another example can be found in Jutta Schickore’s account of test objects as standards for assessing microscopes. Schickore explains the “limits of stabilization” which prevented a single standard from ever being achieved, and also how the failed attempts at standardization helped practitioners better understand the very thing they were attempting to measure (quality microscopy) (Schickore 2009, p. 139).
20. For another example of a non-standard measuring practice which was appropriate (despite its non-replicability) because successful measuring outcomes were obvious, see Harry Collins’ account of the measurement of sapphire quality and the use of “Fred-grease” in the procedure (Collins 2001). Because there was a very low chance of achieving a falsely higher measured energy dissipation (it was much more likely that mistakes would lead to a falsely lower estimate), a higher number could be counted as a better estimate. Any method that could achieve a higher number was fair game; thus, experimenters used the oil from behind different experimenters’ ears to grease the string holding the sapphire, as it was noted that some ear grease worked much better than others in achieving a higher (better) estimate of sapphire quality. Clearly, this method would be difficult for other experimenters to replicate with the same necessary materials. No one knew why some skin oils worked better than others, and yet a clear outcome helped the experimenters determine whether their measuring process and instruments were more or less accurate, making replicable procedures less important to ensure.
References
Author notes
I would like to express my appreciation to Jutta Schickore, Jordi Cat, Sander Gliboff, Domenico Bertoloni Meli, and Amit Hagar for their helpful comments and guidance on earlier versions of this manuscript. I also owe gratitude for the feedback I received from participants in the Measurement at the Crossroads Conference in Paris, where I first presented the core arguments in this paper. I would also like to thank Evan Arnet, Elana Rakoff, Brenton Wells, Paul Shephard, Ali Mirza, and Sophie Osiecki for probing criticisms, suggestions, and encouragement from the earliest brainstorming to the last read-through. This manuscript also benefited from the detailed attention of Fabien Grégis and Nadine de Courtenay, as well as excellent questions and critiques from anonymous reviewers. Thanks also to the careful eyes of unknown copyeditors at MIT Press.