Why Thomson and Maxwell Didn’t Like the Way Faraday “Spoke” About Physics

The author visiting "Michael Faraday" in London, July 2019

Let’s begin with some facts:

  1. In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies,” the paper that would reframe physics based on his special theory of relativity and ultimately render Victorian ether theories obsolete.
  2. Though he credits Maxwell, Einstein begins “On the Electrodynamics of Moving Bodies” with a description of Michael Faraday’s field theory, specifically his definition of electromagnetic induction [1].
  3. In 1907, Einstein declared that the “happiest thought of his life” came when he constructed an analogy between the electromagnetic field and the gravitational field. Victorian physics, and specifically the radical departure from Newtonian mechanics, catalyzed Einstein’s special theory of relativity.

After Einstein published his special theory of relativity in 1905, fields and field phenomena forcefully entered the twentieth-century mainstream. However, as Kieran Murphy has recently argued, Einstein’s rise to prominence has overshadowed nineteenth-century conceptualizations of electromagnetic fields in science (and in literature) [4]. We owe to Michael Faraday much of the massive imaginative leap from Newtonian physics to field theory.

Faraday’s work captivates me. He doesn’t have the kind of celebrity that Einstein and Maxwell do; yet he was an equally important contributor to physics, particularly through his unparalleled imaginative insights and experimental rigor. I am pictured above, grinning next to “Faraday” during a visit to London in July 2019. No short posting can do justice to Faraday’s work. Nevertheless, here I will gloss some key details of Faraday’s theory of fields, and of their reception in the mainstream scientific community. Specifically, I will focus on Faraday’s unorthodox approach to problem-solving, and his departure from the conventions of institutional science. Because Faraday lacked the classical education of his colleagues, his theories forged a new path away from traditional Newtonian force relations and their mathematically predictable, clockwork motions of matter. However, Faraday’s imaginative trailblazing was taken seriously by the scientific community only after his classically trained colleagues, most prominently William Thomson and James Clerk Maxwell, translated his theories into the language of conventional mathematics.

Electromagnetic Induction and Invisible Fields

Throughout the 1830s, a self-educated experimental physicist named Michael Faraday investigated a fascinating but perplexing link between electricity and magnetism.

Michael Faraday. Lithograph. Photo by The Wellcome Collection

In 1819, the Danish physicist Hans Christian Oersted had discovered that a current-carrying wire deflected a magnetic compass needle nearby, a finding he published in 1820 [2]. Almost immediately afterward, Faraday launched into his career-long study of the interaction between electricity and magnetism, or what he later considered two linked manifestations of the same phenomenon.

Though Oersted was first to demonstrate a link between electricity and magnetism, it was Faraday who developed the theory of why a moving magnet generates an electric current in a nearby wire. This phenomenon is called “induction,” an electromagnetic effect that opened the doors for many of the technological transformations of the mid-to-late nineteenth century. In 1831, Faraday first reported his discovery of electromagnetic induction, arguing that electric currents were induced in a changing magnetic field, or when a conductor “cut” what he called magnetic “lines of force.” These lines of force, made visible by iron filings in the space around magnets, formed the backbone of Faraday’s theory. They demonstrated physical activity in the space around current-carrying wire, and around magnets.
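Faraday reasoned geometrically rather than symbolically, but his discovery can be summarized in the notation his mathematical translators and their successors developed (a modern shorthand, not Faraday’s own language):

```latex
% Faraday's law of induction, modern form: the induced electromotive
% force equals the negative rate of change of magnetic flux through
% the circuit.
\mathcal{E} = -\frac{d\Phi_B}{dt}
```

The faster a conductor “cuts” lines of force, the greater the change in flux and the stronger the induced current; the minus sign records that the induced current opposes the very change that produces it.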

Faraday's Lines of Magnetic Force, printed in [3], 1852

Faraday radically rejected several long-held Newtonian assumptions. First, he dismissed action at a distance: the notion that forces like gravity, electricity, and magnetism simply act across the space between bodies, whether through some ethereal medium or through an agent whose workings the mathematics made it unnecessary to specify. For example, when two planets exert a gravitational force on each other, nineteenth-century scientists could mathematically model those forces, but could not explain how gravity operated across the distance separating those planets. Where many scientists accepted the math without recourse to physical explanation, Faraday refused to blackbox such phenomena and instead developed a physical explanation to support the mathematical models.

Second, he eventually scrapped the need for the ether. The luminiferous ether was considered an invisible medium permeating all space through which “imponderable” phenomena like heat, light, and electromagnetism traveled. Faraday pictured electromagnetic action as a tension across space involving contiguous particle transmission. This might tempt us to conjure up a mental picture of ether, although Faraday was fairly convinced by the 1850s that his lines of force did not need ether mediation [5].

Third, he rejected the idea that electricity was a fluid. As Faraday’s theory of the field evolved, he increasingly insisted that the space around conductors was an active component of the electromagnetic energetic system. And, as we will see, Faraday’s interpretation of that active space differed substantially from those of his younger contemporaries, William Thomson and James Clerk Maxwell.

“Translating” Faraday

Despite Faraday’s insights, his background as a self-educated son of a blacksmith impeded his credibility in the mainstream scientific community. William Thomson and James Clerk Maxwell famously cast Faraday’s work into mathematical notation, yet their translation of Faraday’s field theory denatured some of his original intuitions, reinserting fluid analogy and mechanistic explanation where Faraday had removed it.

To better understand Faraday’s intervention in physics, let’s first cover the basics of Newtonian scientific convention, and the various schools of thought that upheld dominant theoretical methodologies during the Victorian era.

Energy science evolved out of a departure from traditional Newtonian physics, which was governed by two intellectual trends: mechanization and Neoplatonist mathematization.

  • Mechanization: mechanical philosophers such as Descartes held that physical phenomena were the clockwork motions of matter.
  • Neoplatonist mathematization: Neoplatonists such as the astronomer Johannes Kepler argued that the world was structurally representational, held together by mathematical law without need to defer to physical explanation [6]. In this view, math was the Platonic ideal: the pure, true reality. Physical structures were inferior to mathematical forms.

The sticking point of the Newtonian worldview was the problem of representing forces that are transmitted at a distance, like gravity, electricity, and magnetism. Newtonian scientists handled this problem variously. Cartesians argued for ether mediation and subtle fluids that transmitted forces across space. Alternatively, others argued that void space was still possible because the mathematical models were empirically successful without describing any agent of transmission [6].

The dawn of nineteenth-century energy physics ushered in a new era of scientific investigation during which action at a distance drew sharp controversy. Among its most relentless critics was Michael Faraday, whom the problem, as his contemporary John Tyndall explained, “perplexed and bewildered” throughout his entire life [5]. Faraday refused to accept action at a distance simply because the math worked out.

Faraday lacked the classical mathematical training that would have groomed him to accept math as metaphysics. He entered the scientific fold by working as Sir Humphry Davy’s assistant at the Royal Institution. Despite his humble beginnings, Faraday ultimately surpassed even Davy’s expertise and matured into one of the nineteenth century’s most imaginative thinkers. By the 1830s, he had challenged action at a distance, uncovered electromagnetic induction phenomena, and suggested that “lines of force” conveyed invisible influences between bodies.

And yet, despite his experimental successes, Faraday’s fresh approach to physics came at the price of his requiring “translators” for the benefit of the greater scientific community.

Joseph Turner describes Thomson’s and Maxwell’s treatment of Faraday as “compar[ing] Faraday’s lines of force to more familiar notions” [7], that is, assimilating his theory into well-accepted scientific and mathematical conventions. As Faraday’s younger and more educated contemporaries, Thomson and Maxwell were initially wary of the way Faraday “spoke” about physics [2]. In other words, the vestiges of his working-class background lingered in his unorthodox approach to science. Thomson even recalled that he had rejected the little he knew of Faraday’s ideas in the early 1840s [8], even though Thomson is widely credited with understanding and employing Faraday’s theory during that period.

Maxwell addressed his own former skepticism in a contrite appeal to readers in the 1873 preface of his Treatise on Electricity and Magnetism, six years after Faraday’s death, explaining,

“I was aware that there was supposed to be a difference between Faraday’s way of conceiving phenomena and that of the mathematicians, so that neither he nor they were satisfied with each other’s language… As I proceeded with the study of Faraday, I perceived that his method of conceiving the phenomena was also a mathematical one, though not exhibited in the conventional form of mathematical symbols. I also found that these methods were capable of being expressed in the ordinary mathematical forms, and thus compared with those of the professed mathematicians… When I had translated what I considered to be Faraday’s ideas into a mathematical form, I found that in general the results of the two methods coincided, so that the same phenomena were accounted for, and the same laws of action deduced by both methods, but that Faraday’s methods resembled those in which we begin with the whole and arrive at the parts by analysis, while the ordinary mathematical methods were founded on the principle of beginning with the parts and building up the whole by synthesis” [9].

As Maxwell describes it here, Faraday’s methods were viable yet needed the coaxing of a trained mind into “ordinary mathematical forms.” What ultimately matters to Maxwell is the end result of his translation labors: that Faraday’s method agrees with the already accepted conventions of mathematics. Although he validates Faraday, he is less concerned with the imaginative differences language produces in arriving at the convergence, the solution.

This is a statement with which some readers may disagree, especially considering Maxwell’s famous argument for the power of knowledge creation by “Physical Analogy.” And it’s true that Maxwell himself was deeply invested in figurative language as a theoretical modality. Nevertheless, I maintain that this very heuristic of analogizing, on the part of both Maxwell and Thomson, indirectly reinscribed the same conventional knowledge that it purported to transgress. Faraday was theorizing fields; they reinstated the dominant mechanistic fluid models to describe what Faraday “meant.”

Since Maxwell and Thomson applied fluid analogies to cast Faraday into standard mathematics, one way to approach their respective translations of Faraday is to ask what analogy meant to these scientists. Maxwell introduced physical analogy as a “mean” between pure mathematics and physical hypothesis. He believed that analogy closed the gap between two equally practical scientific methods, while simultaneously producing new knowledge [7]. Thomson, on the other hand, employed analogy where he needed to make sense of one branch of physics in terms of another.

Most historians of science accept that Thomson synthesized Faraday’s ideas with the theory of heat flow to describe how current travels along wires. Yet Thomson swept electrostatics into a heat transfer analogy because it provided a new way of seeing, or of knowing, a phenomenon that was not readily seeable or knowable to him. He was already involved in the study of heat transfer, and he used Faraday’s research as a creative tool to serve an established epistemological framework.

As language mattered to these scientists, the distinctive way Faraday “spoke” and wrote about physics matters a great deal to the interpretation of his theory. He articulated a clear need for mathematically trained scientists to deconstruct their theoretical conjectures in terms more concrete than symbolic representations of physical phenomena.

For instance, when André-Marie Ampère reduced magnetism to the motion of fluid currents, Faraday denounced Ampère’s conclusion as the ad hoc outcome of mathematical discovery, reached without any demonstration of his process of investigation [2]. By contrast, Faraday did provide extensive experimental demonstration and logical reasoning for his theory, which is why Thomson’s and Maxwell’s return to fluid analogy is, to my mind, somewhat perplexing.

In the course of translating Faraday, Maxwell turned Faraday’s description of the electromagnetic field into a flux analogy, which we still employ in a traditional physics education. Maxwell asserted that “in every case the motion of electricity is subject to the same condition as that of an incompressible fluid” [10], and further encouraged readers to consider dielectrics, or special materials that resist electric current, as elastic meshes that hold this liquid in place [11].

Thomson arrived at Faraday’s work earlier than Maxwell, and did not translate and extend Faraday so much as merely assimilate him into Thomson’s own projects. The traditional interpretation of Thomson’s work asks us to assume that he accepted Faraday’s theory by the early 1840s, while working on an analogy between electrostatic “flow” and heat “flow.” However, as Jed Buchwald compellingly demonstrates in his comparison of Thomson’s and Faraday’s work, “Thomson in 1845 was introducing theoretical notions foreign to Faraday’s theory, about which he was not altogether clear… Thomson’s new formulation was afterwards seen as the essence of Faraday’s electrostatics” [8].

Despite Thomson’s and Maxwell’s inarguably crucial roles in developing electromagnetic field theory, Faraday’s original ideas do differ markedly from their translations.

Science and Technology Studies (STS) scholars have thoroughly discussed the machinic tendency of western science to flatten varied and layered perspectives into convention [12]. Assimilating Faraday into an already-accepted scientific tradition hammered one mode of representation into another, dominant one. In other words, scientific credibility depends on a number of variables, including a scientist’s ability to follow institutional convention. Faraday was brilliant, and he thought “outside the box,” as we say, but none of that mattered to institutional circles until his theories could be made legible to the authorities of Victorian science.

In their book, Laboratory Life, Bruno Latour and Steve Woolgar argue that “cycles of credit” underpin the motivation for scientific advancement. “Credit as credibility” operates as a commodity that can be accumulated, circulated, and/or lost. Scientists’ projects are thus “investments” within a credibility economy where success of the project furthers the symbolic capital of the scientist [13]. Although we cannot make direct comparisons between the twentieth-century laboratories that Latour and Woolgar studied and Victorian institutional science, we can argue that knowledge production in the Victorian era was no less restricted by convention and the prestige economy than sciences of recent decades. In fact, Crosbie Smith has reinforced this argument by claiming that “the pursuit of national credibility” [14] was a core objective of the first framers of energy physics.

While Faraday’s work made space for the large-scale development of field theory, it was the translation of value from Faraday’s ideas to Maxwell’s and Thomson’s interpretations of those ideas that cemented field theory as legitimate institutional science. Faraday was indeed employed by the Royal Institution, and thus operated within the structure of institutional knowledge production, but it would be a misstep not to acknowledge that his original theory emerged as a rupture in institutional thinking. Moreover, had Faraday not made that giant imaginative leap about electromagnetic induction, Einstein might not have had his “happiest thought.” In a system that rewards investment in “the pursuit of national credibility,” we owe much to the perspectives that thoughtfully diverge from those conventions.  


[1] Einstein, Albert. “On the Electrodynamics of Moving Bodies.” Annalen der Physik 17. 1905.

[2] Purrington, Robert D. Physics in the Nineteenth Century. Rutgers University Press. 1997.

[3] Faraday, Michael. “On the Physical Character of the Lines of Magnetic Force,” Philosophical Magazine and Journal of Science, Fourth Series, 3, no. 20 (June 1852).

[4] Murphy, Kieran M. Electromagnetism and the Metonymic Imagination. The Pennsylvania State University Press, 2020.

[5] Jones, Bence. The Life and Letters of Faraday, vol. 2, 2nd Ed. Longmans, Green, and Co., 1870.

[6] Cao, Tian Yu. Conceptual Developments of 20th Century Field Theories, 2nd ed. Cambridge University Press, 2019.

[7] Turner, Joseph. “Maxwell on the Method of Physical Analogy.” The British Journal for the Philosophy of Science 6.23, 1955.

[8] Buchwald, Jed. “William Thomson and the Mathematization of Faraday’s Electrostatics.” Historical Studies in the Physical Sciences 8, 1977.

[9] Maxwell, James Clerk. Preface to A Treatise on Electricity and Magnetism, First ed., Vol. 1, 1873. Dover Publications, 2016.

[10] Maxwell, James Clerk. A Treatise on Electricity and Magnetism, First ed., Vol. 1, 1873. Dover Publications, 2016.

[11] McAulay, Alex. “On the Mathematical Theory of Electromagnetism.” Philosophical Transactions of the Royal Society of London A 183, 1892.

[12] Bruno Latour’s work is notable here. Latour argues that inscription devices translate and flatten transverse perspectives into written documents. See Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard University Press, 1999.

[13] Latour, Bruno, and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton University Press, 1986.

[14] Smith, Crosbie. The Science of Energy: A Cultural History of Energy Physics in Victorian Britain. University of Chicago Press, 1998.

Maxwell’s Demon
Photo by Needpix

So far in this series, we have discussed how Maxwell’s Demon emerged in 1867 as a thought experiment designed to reveal the statistical nature of the second law of thermodynamics. If you’re new to this thread, be sure to check out the historical background and politics of Maxwell’s Demon in Part I. Part II takes us to the Information Age, where the question of entropy reframes Maxwell’s Demon for a different set of cultural concerns. Therefore, I want to begin Part II of “A Tale of Two Entropies” by hopping forward roughly a century to 1965, when Thomas Pynchon published his short novel, The Crying of Lot 49. By that time, both entropies had circulated in the public imagination such that Pynchon could satirize the Demon holding them together.

A Literary Prelude:

As a scholar of Victorian literature and science, I claim no mastery over postmodernism or the oeuvre of Thomas Pynchon. With that disclaimer out of the way, I’m now going to muddle through a gloss of entropy in a passage from The Crying of Lot 49. This novel captured my interest when I first read it as an undergrad, and it has remained a personally generative text as I’ve learned more about the Victorians over the years. Despite the definitively postmodern exploration of representation and semiotics, there is, to my mind at least, something surprisingly Victorian about how the characters in this novel grasp at representations to render a physical reality. And equally, yet ironically, as Bruce Clarke puts it, there is also something “un-Maxwellian, if not un-Victorian” about that same point.

After all, Maxwell’s particular strategies were successful precisely because his models remained metaphorical or analogical [1]. Victorian physics relied on various strategies of representation, from mathematics to metaphor, to coax energy out of its capacious and imponderable invisibility. Yet Maxwell, unlike some of his contemporaries, never let those “factual fictions” [1] stand in for reality. He instead used them as scaffolding or mediating devices. As such, the Demon is an aid for Maxwell, rather than something to be hardened into anthropomorphic form or objective agency. Pynchon’s novel plays with this Victorian question in a postmodern context.

By and large, The Crying of Lot 49 is not “about” entropy; but it does place Maxwell’s Demon in the center of a larger allegory on communication and representation. What concerns us here, specifically, is the “Nefastis Machine,” or Pynchon’s satirical version of a perpetual motion engine. Spoofing on Victorian spiritual and telepathic valences of electromagnetic field theory and the luminiferous ether, Pynchon positions “communication” at the nexus of thermodynamic entropy and information entropy.

The Nefastis Machine is a box containing an “actual” Maxwell’s Demon with which a “sensitive” individual communicates by staring at a picture of Maxwell in profile [2]. Inside the box, the Demon sorts gas molecules, and then telepathically communicates the information about those molecules to the sensitive. The sensitive feeds that quantity of information back to the Demon (again, telepathically), and a piston moves.

James Clerk Maxwell in profile; A Tale of Two Entropies
James Clerk Maxwell in profile; Frontispiece of Matter and Motion (1876): Source [2]

Nefastis, the machine’s inventor, explains his fascination with entropy to the novel’s protagonist, Oedipa Maas, but she finds the entire affair confusing and overwhelming [3].

“He began then, bewilderingly, to talk about something called entropy… She did gather that there were two distinct kinds of entropy. One having to do with heat-engines, the other to do with communication. The equation for one, back in the ‘30’s, had looked very like the equation for the other. It was a coincidence. The two fields were entirely unconnected, except at one point: Maxwell’s Demon. As the Demon sat and sorted his molecules into hot and cold, the system was said to lose entropy. But somehow the loss was offset by the information the Demon gained about what molecules were where”.

Pynchon stuffs fifty years of Information Theory and its mutagenic consequences for Maxwell’s Demon into this tidy paragraph. No wonder Oedipa’s head is swimming. In short, we now have two entropies: one thermodynamic, and one informational. Their equations look the same, but they are not the same. Somehow, Maxwell’s Demon connects them.

Nefastis concludes, “Entropy is a figure of speech, then…a metaphor. It connects the world of thermodynamics to the world of information flow. The Machine uses both. The Demon makes the metaphor not only verbally graceful, but also objectively true.” Feeling like a “heretic,” Oedipa asks, “But what… if the Demon exists only because the two equations look alike? Because of the metaphor?” [3]

And this is the crux of the issue. Very much like metaphor, which operates based on similarities and differences between two objects, the differences between information entropy and thermodynamic entropy matter. In fact, N. Katherine Hayles made this very point in her book, Chaos Bound [4]. We cannot collapse these two entropies into sameness, even if their differences are culturally and mathematically suppressed.

So, I wanted to begin with The Crying of Lot 49 because it provides us with a vivid image of the dilemma: the allegory as a shifting, changing, and (not quite) material unit. Maxwell’s Demon in the Nefastis Machine is a blackboxed variety of what was originally a neat thought experiment about the second law of thermodynamics. Now we have a Maxwell’s Demon, purportedly a concrete agent that we can never see but that definitively communicates with some, but not all, individuals about the locations of gas molecules. The Nefastis Machine belongs to Pynchon’s commentary on the information model of entropy. He intertwines thermodynamics and information theory not as natural bedfellows but as paradox: perpetual motion through communication where it is thermodynamically impossible.

We are now in a position to understand how entropy acquired another definition, as information, and how Maxwell’s Demon figures in that scientific and cultural shift.

Reconfiguring Entropy: From Boltzmann to Shannon

Let’s backpedal now, returning to the Victorians. Recall from Part I that Maxwell engaged in a lengthy correspondence with Rudolf Clausius about the kinetic theory of gases. It was Maxwell who extended Clausius’s work on molecular physics and thermodynamics.

There was another important physicist, Ludwig Boltzmann, who adopted Maxwell’s premise that the second law of thermodynamics had only statistical certainty, and who derived a formula called the “H theorem” to describe the statistical distribution of thermodynamic molecular motion [5]. Boltzmann’s stance on the thermodynamics of microstates adjusted through the years; but by the 1870s he favored a statistical over a mechanical model, thus shifting entropy into the domain of probability.

The H theorem supplied a proof of the second law of thermodynamics using probability calculus. In Jos Uffink’s analysis of this initial iteration, Boltzmann had not yet maneuvered his way into probability theory; rather, he marshalled the usefulness of probability calculus to further the description of a mechanical theory of entropy [6]. Nevertheless, he was inching towards his eventual description of entropy as a statistical measure of randomness in a closed system. The more random or dispersed the state of the molecules, the higher the entropy.

At this point, we need to make an important distinction. For Clausius, the scientist who initially coined the term “entropy,” a hot gas is more entropic because its molecules move faster and it therefore undergoes a swifter thermal exchange. For Boltzmann, however, the hot gas is more entropic because faster-moving molecules intermix more thoroughly, producing a more random configuration.

The more arrangements, the more randomness, the more entropy.
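This idea is usually condensed into the formula later engraved on Boltzmann’s tombstone (the compact notation is Planck’s, not the form Boltzmann himself used in the 1870s):

```latex
% Boltzmann entropy: S grows with the number W of microscopic
% arrangements (microstates) compatible with the observed macrostate.
S = k_B \ln W
```

Here k_B is Boltzmann’s constant and W counts the possible molecular arrangements: more arrangements, larger W, higher entropy.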

In 1929, Maxwell’s Demon returned to the conversation when Leo Szilard argued that, in order to sort molecules, the Demon needed a “kind of memory.” Sitting in his chamber, he needs to remember where fast and slow molecules are located [7]. Leon Brillouin famously took up the “memory” question in 1951. His paper, “Maxwell’s Demon Cannot Operate” [8], argues that the Demon can’t do his sorting job at all because the vessel he lives in is too dark. In technical terms, it has the radiation properties of a “blackbody,” or something that radiates as much energy as it absorbs. If we equip the Demon with a headlamp, on the other hand, he can see his molecules and sort them. But by doing this, we also introduce a new source of illumination into the system. The system must now absorb this new radiation, and so the information the Demon acquires is offset by an increase in entropy.

Where we once had a “neat-fingered being,” we now have a Demon with a headlamp. Most importantly, Brillouin’s paper concludes that information and entropy are connected. As Hayles summarizes, “the potent new force of information had entered the arena to combat entropy” [4].

In a 1987 article in Scientific American [9], Charles H. Bennett claimed that the Demon doesn’t necessarily need a headlamp. This is a question of memory storage, not of measurement, he argued. Because the Demon needs to remember the measurements he makes, at some point he will also need to clear out that space to make room for more data. The destruction of that information results in an entropy increase.

Bennett’s point is significant because it signals a shift in the imagination of entropy. No longer is entropy attached to Victorian anxieties of the universe running down and growing cold; no longer are we awaiting an apocalyptic “heat death” as prophesied by our scientific authorities in one sweeping cosmological gesture. Now we are dealing with the fear of information pile-up, until, as Hayles puts it, “[information] overwhelms our ability to understand it” [4].

But Bennett did not bring us to the thermodynamic/information isomorphism that Pynchon grapples with and satirizes in The Crying of Lot 49. That move we owe to Claude Shannon.

In 1948, an engineer at Bell Laboratories named Claude Shannon published a paper titled “A Mathematical Theory of Communication” [10]. This two-part paper issued an argument that became the foundation of what we call Information Theory.

Simply, Shannon argued that information and entropy were the same thing.

Shannon based this claim on the fact that his equation for entropy took the same form as Boltzmann’s equation for entropy. Hayles calls this “Shannon’s Choice” [4], i.e., the choice to equate these two entropies based on the similarities of their equations, despite a crucial gap in meaning.
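To see the resemblance behind “Shannon’s Choice,” the two formulas can be set side by side; below I pair Shannon’s 1948 definition with the Gibbs form of thermodynamic entropy, a later generalization of Boltzmann’s:

```latex
% Shannon's information entropy, measured in bits:
H = -\sum_i p_i \log_2 p_i
% Gibbs form of thermodynamic entropy:
S = -k_B \sum_i p_i \ln p_i
```

Formally, the two differ only by a constant factor and the base of the logarithm. The gap Hayles identifies lies not in the symbols but in what the probabilities p_i mean in each case.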

Let’s think about what she means here. For Shannon, less information means less entropy: low entropy describes a system that is easy to predict, one that does not surprise us much. This is like saying you have a drawer of 10 pairs of socks, but 8 pairs are black. If you reach into the drawer with your eyes closed, there is a high probability that you will grab a black pair. That’s low entropy.
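The sock-drawer arithmetic can be made concrete with a short calculation, a minimal sketch using Shannon’s formula (the function name and the second, evenly mixed drawer are my illustrations, not the author’s):

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits; zero-probability outcomes contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The drawer from the text: 8 of 10 pairs are black, 2 are not
lopsided = shannon_entropy([0.8, 0.2])

# For contrast, a maximally mixed drawer: 10 pairs, all different colors
mixed = shannon_entropy([0.1] * 10)

print(f"lopsided drawer: {lopsided:.3f} bits")  # low entropy: easy to predict
print(f"mixed drawer:    {mixed:.3f} bits")     # high entropy: hard to predict
```

Grabbing blind from the lopsided drawer rarely surprises you (about 0.72 bits of entropy), while the evenly mixed drawer maximizes surprise (about 3.32 bits, the most possible for ten outcomes).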

But for Boltzmann, choice has nothing to do with entropy. Entropy is instead a matter of probability, of not knowing the microstates of a system. Here, think about that slow-moving, low-entropy gas. The molecules are less intermixed, and so we know more about them. It is easier to make predictions.

Obviously, there is a difference.

Circling Back:

Remember that in Pynchon’s novel, Nefastis told Oedipa, “Entropy is a figure of speech, then… a metaphor. It connects the world of thermodynamics to the world of information flow. The Demon makes the metaphor not only verbally graceful, but also objectively true.”

Having glossed a history of information entropy, I want to return to this moment one last time to consider Nefastis’s comment.

Let’s start here: “Entropy is a figure of speech.”

Entropy is a term coined by Rudolf Clausius in 1865, in part chosen for its similarities to “energy” (an ancient word, then new to science: see The Trouble with Defining Energy). Clausius chose and designed this word as part of a deliberate agenda to codify the two laws of thermodynamics.

Therefore, we would say that entropy “actually” is a material-discursive entity. That is, while we cannot reduce entropy to simply a figure of speech, simply a metaphor, entropy is also not an objective thing, out there in the universe, that Clausius and his colleagues “found” and “discovered.”

Entropy, like energy, is both physical entity and discursive construction. It is what Donna Haraway (and other feminist Science and Technology Studies scholars) would call a “natureculture,” or the entanglement of natural phenomena and the historical/cultural/semiotic practices through which we make sense of natural phenomena [11].

So, Nefastis is not quite right here; but he’s also not completely wrong. What makes entropy so perplexing is its transformation along cultural and historical fault lines. In the Victorian era, Thomson and his colleagues weaponized entropy against secular materialists like Darwin and Tyndall in order to shore up support for a North British theological agenda in physics. In the Information Age of the twentieth century, however, the cultures shifted; the anxieties shifted. Shannon could argue that information and entropy are the same thing on the basis of mathematical isomorphism, but the equation also stuck because, as Bennett argued, information pile-up was a novel cultural threat.

And where is the Demon in all of this?

“The Demon makes the metaphor not only verbally graceful, but also objectively true.”

What does it mean to make a metaphor “objectively true”? If you think about it, a metaphor can never be objectively true. Literal truth is anathema to figurative language. If I say, “my love is a fire,” let’s truly hope my love is not literally fire. In fact, it can’t be; that makes no sense. This metaphor operates because love and fire are different entities. We see a fire in our mind’s eye. We know how hot it is, how it burns, how we can kindle a fire or let it rage out of control. All that sensory richness we pack into the word “fire” and then attach it to the word “love.” Fire is the concrete anchor, and love is the floating abstraction that we pin down with that anchor. And, in doing so, a sort of magic happens where “love” acquires new dimensions. But love is never literally fire.

Returning to the Demon, then, what does it mean when Nefastis says that Maxwell’s Demon makes the metaphor of entropy objectively true? I would argue that Pynchon is alerting us to the reality that has emerged from a heuristic of representation. In fact, most of The Crying of Lot 49 questions what is “real” and what is representation. In the case of Maxwell’s Demon, though, the original metaphor that Maxwell used in his letter to Tait is stacked beneath other metaphors. In other words, the “anchor” of the metaphor is no longer a concrete entity, but rather another metaphor. The Demon becomes its own anchor: the sorting Demon is now a headlamp-wearing Demon. From here, new science and cultural context emerge intertwined. To my mind, that’s what Pynchon is getting at.

So, the Tale of Two Entropies is still unfolding, particularly because contemporary data science continues to rework the concept of information entropy, and will inevitably keep reworking it as time progresses. Again, as a Victorianist, I don’t claim familiarity with that domain of knowledge; however, I think that tracing out the historical trajectories of scientific metaphors is a useful exercise because it reveals how figures like Maxwell’s Demon mediate between our desires and our measurements in and of the world around us.


[1] Clarke, Bruce. Energy Forms: Allegory and Science in the Era of Classical Thermodynamics. University of Michigan Press, 2001.

[2] Pynchon, Thomas. The Crying of Lot 49. 1965. HarperCollins Publishers, 1966.

[3] Maxwell, James Clerk. Matter and Motion. 1876. Macmillan, 1920.

[4] Hayles, N. Katherine. Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Cornell University Press, 1990.

[5] Harman, P.M. Energy, Force, and Matter: The Conceptual Development of Nineteenth-Century Physics. Cambridge University Press, 1982.

[6] Uffink, Jos. “Boltzmann’s Work in Statistical Physics.” The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/archives/spr2017/entries/statphys-Boltzmann/.

[7] Szilard, Leo. 1929. “On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings.” Zeitschrift für Physik 53: 840-856.

[8] Brillouin, Leon. 1951. “Maxwell’s Demon Cannot Operate: Information and Entropy. I.” Journal of Applied Physics 22 (March): 334-357. https://doi.org/10.1063/1.1699951

[9] Bennett, Charles H. 1987. “Demons, Engines, and the Second Law.” Scientific American 257 (November): 108-116. https://www.scientificamerican.com/article/demons-engines-and-the-second-law/.

[10] Shannon, Claude E. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (July and October): 379-423, 623-656.

[11] Haraway, Donna, and Thyrza Goodeve, How Like a Leaf: An Interview with Donna Haraway. Routledge, 1999.

See also: Latour, Bruno. We Have Never Been Modern. Translated by Catherine Porter. Harvard University Press, 1993.

Photo at Physicsforme

Update: Check out Part 2 of this series here

Even if you don’t know what it is, you have probably heard of “Maxwell’s Demon.”

Like Schrödinger’s Cat, Maxwell’s Demon circulates in the cultural imagination more potently than the actual thought experiment it originally signified. The brainchild of James Clerk Maxwell in 1867, what we call “Maxwell’s Demon” has survived more than a century of technological transitions and remains wedded to the concept of “entropy,” though what that means, exactly, is complicated and differs from its Victorian origins.

In this series of two postings, we will explore the birth and evolution of Maxwell’s Demon. This first post lays the historical, political, and conceptual groundwork for a subsequent discussion of what entropy means to basic Information Theory (Part 2). Depending on your background, or from where you’ve acquired familiarity with the term “entropy,” you might associate this word with one of two very different registers: on the one hand, entropy signals disorder, dissipation, and “heat death.” On the other hand, it also means information, “equivocation,” and the destruction of information as memory.

What feels like a tension between these groupings of definitions is really more like a point of rupture in cultural association, where a Victorian allegory (the demon) gets imported into the information age and is forced to fine-tune how he mediates between cultural anxiety (or desire) and physical theory. What is fascinating about Maxwell’s Demon, I think, isn’t as much what Maxwell’s original thought experiment was, or how that concept mutated to fit the twentieth century, but rather how allegory performs those operations across time.

So, what follows is a Tale of Two Entropies, and the story of how Maxwell’s Demon remains the hinge point of two different, yet generative, scientific applications.

Introducing Thermodynamic Entropy

Maxwell’s Demon is a thought experiment designed to reveal the statistical nature of the second law of thermodynamics. In order to understand what the demon does, we need to establish a basic understanding of this law, and the kind of cosmological resonance it had in the mid-nineteenth century.

The second law of thermodynamics (sometimes referred to as the “entropy law” or the “law of dissipation”) places a directionality constraint on energy transfers. It states that, in a closed system, energy available for work moves down a gradient from concentration to diffuseness. This is why restoring a closed system to a higher state of order requires an outside input of energy; or, taking the universe as a closed system, this is why the universe naturally drifts towards a state of cold, workless equilibrium, or “heat death.” This is also why your coffee will not spontaneously reheat itself, and why perpetual motion machines (i.e., getting work output for nothing) cannot exist. The arrow of time is inextricable from such processes.

In 1865, Rudolf Clausius attached a new word to such a dismal concept. He coined the term, “entropy,” to denote the energy unavailable for work production, a value which necessarily increases over time in any closed system. Clausius arrived at “entropy” partly because it sounded like “energy,” and partly because its roots include the Greek word for transformation [1]. For Clausius and his contemporaries, entropy was a measure of the disorder in a system.

One of those contemporaries was William Thomson (Lord Kelvin), who, like Clausius, published an interpretation of the two laws of thermodynamics. Thomson formally synthesized his own findings with those of James Prescott Joule, Clausius, and Macquorn Rankine; and, in 1852, Thomson presented a short series of papers to the Royal Society of Edinburgh that clarified what he called the “dissipation” of mechanical energy as a universal tendency in nature [2].

“Dissipation” was as much a mood as it was a state of matter. Thomson selected “dissipation” to describe the energetic tendency to drift towards diffuseness; but “dissipation” was also a common nineteenth-century term for describing subjects as wasteful, unproductive, morally depraved, or frittered out [3]. The world and its irreversible processes were getting less and less productive, Thomson argued. And it was up to man (read: the British) to direct each transfer of energy in the most useful, work-extractive manner possible. Moreover, Thomson attached a cosmological, Judeo-Christian reading to the inevitability of universal heat death. In his 1862 lecture, “On the Age of the Sun’s Heat,” he weaponized the logic of thermodynamics against Charles Darwin and his secular colleagues by calculating (incorrectly, as it turned out) the age and fate of the sun. Thomson declared [4],

“It seems, therefore, on the whole most probable that the sun has not illuminated the earth for 100,000,000 years, and almost certain that he has not done so for 500,000,000 years. As for the future, we may say, with equal certainty, that inhabitants of the earth cannot continue to enjoy the light and heat essential to their life, for many million years longer, unless sources now unknown to us are prepared in the great storehouses of creation”

Such an end-days vision of entropy or dissipation brought a sense of material finality to traditional Christian cosmology. It certainly worked to shut the secular materialists up for a while.

So, when James Clerk Maxwell introduced entropy as a statistical law, his thought experiment – “Maxwell’s Demon” – became the center of a thermodynamic controversy.

“La miserable race humaine périra par le froid” in La Fin du Monde, by Camille Flammarion (1893) at Archive.org

Maxwell Invents a “Being”; Thomson Creates a “Demon”

It’s not that Maxwell was opposed to Thomson’s theological agenda; in fact, Maxwell’s own Anglo-Scottish background fed his tendency to reify a Platonic spiritual ideal in mathematical and physical law. Thomson and Maxwell simply happened to approach the entropy law from different perspectives.

Maxwell had been corresponding with Clausius for some time on the molecular behavior of gases. Specifically, where Clausius introduced several important and novel concepts to illustrate how the temperature of a gas can be described in terms of its energy, Maxwell raised the stakes of Clausius’s kinetic model by arguing that a statistical method must replace a strict dynamical method of calculating molecular motion [5].

This means that, because spontaneous fluctuations in the motions of individual molecules are always occurring, we can only ever talk about molecular averages. In the deep cold of space, for instance, there are individual molecules zooming about with heat energy. However, because the average molecular motion remains so tiny, those zooming outliers do not represent what we perceive. What this means for the second law of thermodynamics, more importantly, is that it has only statistical certainty, or that the law itself describes the properties of a system, but does not describe the properties of any individual molecule in that system.
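Maxwell’s statistical point can be sketched numerically. In the toy sample below, speeds are built from Gaussian velocity components as a stand-in for the Maxwell-Boltzmann distribution; the distribution, seed, and sample size are my own illustrative choices. The average (the macrostate) is stable and predictable, while individual molecules zoom far beyond it:

```python
import math
import random

random.seed(0)

# Toy gas: 10,000 molecular speeds, each the magnitude of a
# three-component Gaussian velocity vector.
speeds = [
    math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(3)))
    for _ in range(10_000)
]

mean_speed = sum(speeds) / len(speeds)
fastest = max(speeds)

# The macrostate average is what we perceive...
print(round(mean_speed, 2))
# ...but some individual molecules move far faster than it.
print(fastest > 2 * mean_speed)  # True: zooming outliers exist
```

The law describes the ensemble; it says nothing certain about any single molecule in it.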

Let’s think about what that means for Thomson’s elaborate, apocalyptic prescription for the end times. While defining entropy as a statistical law does not endanger its “truth” (the second law of thermodynamics has never been “in danger”), it does question the absoluteness of entropy. It certainly asks us to think about the finality of “heat death” differently. Even in a closed system that has reached thermodynamic equilibrium, from which no more work can be extracted, we can only ever describe heat energy in terms of its macrostates. Maxwell insisted that it was impossible to go pointing at this or that molecule and report on its individual energy. Certainly some individual molecules are moving faster than their macrostate averages. This fact doesn’t undo entropy, of course, but it confines the second law of thermodynamics to the realm of statistics. All this talk of being “statistically certain” dilutes the cosmic register of Thomson’s apocalyptic heat death.

Enter the demon.

In an 1867 letter to Peter Guthrie Tait, another British thermodynamicist, Maxwell illustrated his statistical argument with a thought experiment in which a “neat-fingered being” “knows the paths and velocities of all the molecules” in a chamber, but “can do no work except open and close a hole in the diaphragm by means of a slide without mass” [6]. In his 1871 Theory of Heat, he elaborated on this concept [7]:

“Now let us suppose that such a vessel is divided into two portions, A and B, by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B, and only the slower ones to pass from B to A. He will thus, without expenditure of work, raise the temperature of B and lower that of A, in contradiction to the second law of thermodynamics”.

The being monitors the microstates of a thermodynamic system and acts as an internal agent, vetting the pathways of molecules based on their respective energies. By preventing the entropic drift from a hotter to a colder state, the being can thus extract continuous work from such an engine. It operates as a perpetual motion machine, giving us work for nothing. Simply by observing the movements and ordering these molecules into different chambers, the being prevents thermodynamic equilibrium. Of course, there are no tiny beings with massless, frictionless doors; and so we don’t know the energy states of individual molecules. Entropy remains statistically reliable.
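The being’s trick can be rendered as a toy simulation. Everything here (uniform “speeds,” the 0.5 cutoff, the chamber sizes) is an illustrative assumption of mine, not Maxwell’s own setup; the point is only that sorting alone manufactures a temperature gradient:

```python
import random

random.seed(42)

# Two chambers, A and B, start at the same "temperature": molecular
# speeds in each are drawn from the same distribution.
a = [random.uniform(0.0, 1.0) for _ in range(500)]
b = [random.uniform(0.0, 1.0) for _ in range(500)]

def avg(xs):
    return sum(xs) / len(xs)

threshold = 0.5  # the demon's cutoff between "swifter" and "slower"

# The being opens the hole only for swift molecules passing A -> B
# and slow molecules passing B -> A.
swift_from_a = [v for v in a if v > threshold]
slow_from_b = [v for v in b if v <= threshold]
a = [v for v in a if v <= threshold] + slow_from_b
b = [v for v in b if v > threshold] + swift_from_a

# B is now "hotter" than A: a gradient created by sorting alone,
# "without expenditure of work" -- the apparent contradiction.
print(avg(b) > avg(a))  # True
```

Of course, the real demon must also see the molecules, and later physicists (Szilard, Brillouin, Bennett) located the hidden entropy cost precisely there.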

You may have noted that Maxwell did not call his being a “demon.” In 1879, Thomson delivered a lecture titled, “The Sorting Demon of Maxwell,” in which he described (and popularized) Maxwell’s molecule-sized homunculus in far greater detail than Maxwell ever did [8]. In fact, Maxwell rejected Thomson’s moniker for his being, urging his colleagues to think of it more as a valve than a demon [5].

But Thomson’s appropriation of the term, “demon,” is apt, considering what “daemonic” figures do.

Bruce Clarke’s reading of scientific allegory offers up a neat definition: “Daemons are agents of communication, typically taking the shape of messengers, or guardians, or other figures of admonition. The crucial thing is that they can take whatever shape the larger conceptual scheme demands: they are inherently metamorphic.” The daemon operates by “bridging conceptual gaps and bearing important cultural information” [6].

The daemon balances abstract concepts, theories, and desires with concrete images or modeling. And, in the interstices of those physical phenomena and cultural desires, a concept – like entropy – can harden into something new. Indeed, this is exactly the function Maxwell’s Demon occupied in Victorian scientific culture. As an agent of communication, the demon performs a physical impossibility to undercut the theological absolutism of the entropy law, and, with it, the authority of William Thomson’s projection for the end of life on earth. Clarke suggests that Maxwell’s Demon gestures to a “desire to secure conceptual salvation from the finality of that last judgment” [6]; yet, I wonder whether the being/daemon/demon wasn’t performing more of Maxwell’s desire to concretize the limitations of the second law of thermodynamics, particularly from the perspective of (human) beings less “clever” and “neat-fingered” than the being of his thought experiment. Maxwell was not irreligious; he, like Thomson, maintained that only God could restore energetic order and prevent thermodynamic equilibrium. But by jettisoning the dynamical theory of gases, he also recognized that we can only ever calculate that entropic drift with statistical certainty.

The Demon Evolves

Despite Maxwell’s intention for his Demon, several of his colleagues did feel threatened by the implications of the thought experiment. They worked deliberately to tame the Demon back into Thomsonian cosmology.

In particular, Tait (the recipient of the letter where Maxwell originally described his Demon) and another scientist, Balfour Stewart, anonymously published a notorious volume called The Unseen Universe or Physical Speculations on a Future State (1875). Like most Victorian texts, The Unseen Universe belongs to the public domain and can be found here. If you have time, you really should explore this gem of whackadoodle logic. Think of it as you might Ancient Alien theory: a little bit of scientific popularizing, a whole lot of far-fetched speculation, and packed with politics about the authority of science.

The Unseen Universe was published in response to John Tyndall’s infamous “Belfast Address.” In the autumn of 1874, Tyndall delivered the address to the British Association for the Advancement of Science, arguing in favor of secular materialism. Tyndall argued that only science, not religion, offered a way to learning the “true” nature of phenomena [9]. It’s important to understand here that he was contributing to an ongoing debate on how to define the contours of the scientific establishment. That is, what counts as legitimate science? How should we (the British) define disciplinarity? Where should religion fit here, if at all?

As traditional Christian moralists, Stewart and Tait countered in The Unseen Universe that indeed the physical laws of science pointed to a real yet invisible spiritual reality [10]. They co-opted Maxwell’s Demon, turning it into an “army” of demons with many doors, operating against entropy seemingly at the command of a higher intelligence. With the narrative thus altered, the Demon becomes assimilated into Thomson’s apocalyptic entropy: in The Unseen Universe, we cannot escape entropy because, as Clarke puts it, “the fallen material constitution of the world is bound to foil any attempt we might make” [6].

It’s important to understand that this is different from Maxwell’s original interpretation. Maxwell never intended his “being” to offer the possibility of escape from the second law of thermodynamics. Yet the “many demons” narrative twists the Demon into a fallen hero who tries yet cannot outwit the Law of Nature.

Born in 1867, Maxwell’s “neat-fingered being” had already evolved by 1875. By the dawn of the information age, he would evolve much more. This is what we will explore in the second part of “A Tale of Two Entropies”: what Maxwell’s Demon did for Information Theory.


[1] Clausius, Rudolf. The Mechanical Theory of Heat, with Its Applications to the Steam-Engine, and to the Physical Properties of Bodies, edited by T. Archer Hirst, Introduction by John Tyndall, London: John Van Voorst. 1867.

[2] Thomson, William. “On a Universal Tendency in Nature to the Dissipation of Mechanical Energy,” in The Philosophical Magazine 4, 1852.

[3] “dissipated, adj”. OED Online. September 2020. Oxford University Press.

[4] Thomson, William. “On the Age of the Sun’s Heat.” Macmillan’s Magazine, March 1862, 388-93.

[5] Harman, P.M. Energy, Force, and Matter: The Conceptual Development of Nineteenth-Century Physics. Cambridge University Press, 1982.

[6] Clarke, Bruce. Energy Forms: Allegory and Science in the Era of Classical Thermodynamics. University of Michigan Press, 2001.

[7] Maxwell, James Clerk. Theory of Heat. 1871. Ninth Edition. Longmans, Green and Co., 1888.

[8] Thomson, William. “The Sorting Demon of Maxwell.” Proceedings of the Royal Institution. Vol. ix, February 28, 1879. 113.

[9] Tyndall, John. Address Delivered Before the British Association Assembled at Belfast. Longmans, Green, and Co. 1874.

[10] Stewart, Balfour, and P.G. Tait. The Unseen Universe or Physical Speculations on a Future State. Macmillan, 1875.

Photo by Petar Milošević / CC BY-SA

There is a short answer and a long answer to this question. The short answer is simple: the steam engine predates thermodynamics. But the long answer is much more interesting when we consider the cultural and industrial variables at play during the birth of energy science.

Simply, the steam engine predates thermodynamics by over a century. In fact, its practical and industrial applications drove the development of what became thermodynamics in the mid-nineteenth century.

The implications of this point are worth unpacking. We are somewhat used to thinking about theoretical science as the driver of experimental and applied sciences. That is, we might be tempted to believe that scientists develop a theory and then apply it to produce new technologies. The history of science shows us that this is not always how science operates. In the case of thermodynamics, theoretical physics certainly did not steer the course of what became the science of energy physics. Industry did. This post provides a brief overview of the development of the British steam engine, and how the science of thermodynamics emerged from the problems produced by industrial shifts. Most importantly, though, I explain how these laws were not just discovered, but rather addressed specific worldviews, technological complications, and material questions tied to commerce, imperialism, and the ideal of progress. This is important to understand because such ideals set the tone for writing energy into scientific natural law, and for how, almost two hundred years later, we continue to imagine energy and its possibilities.

The Invention of the Steam Engine

The Victorian period was the age of steam power. It was the steam engine that released textile mills and other industry from water power’s geography-dependent architectures, driving populations to new, smoky factory cities. But this iconic steam imaginary owed nothing to thermodynamics at the outset. In fact, it was the other way around.

Historians trace modern mass politics and ways of living to industrial organizations of fossil fuel energy [1]. England burned coal as early as the thirteenth century, yet it wasn’t until what we call the “Industrial Revolution” (the mid-eighteenth century to the mid-nineteenth century) that humans transitioned to a coal-based energy system. Until then, steam engines consumed more fuel than they could extract from England’s water-filled coal mines; but after improved engines abetted mining and industrial iron production, England began to harness its network of waterways to cheaply transport coal. It was this self-reinforcing system of geography and industry, rather than superior technology or innovative ability, that caused Britain’s development to “diverge” from other parts of the world such as China, Japan, and India [2].

As early as the 1690s, inventors played around with steam-pressured pumps. Thomas Savery patented his “engine to raise water by fire” during this period and published an accompanying text, The Miner’s Friend, in 1702 [3].

Newcomen Engine System: Photo by Joost J. Bakker from IJmuiden

Savery’s steam pump was not particularly useful for coal extraction, however. It was Thomas Newcomen (1663-1729) whose atmospheric steam engine was effective enough to address the demand of Cornish miners [4]. Newcomen’s engine worked by letting low-pressure steam move from a boiler into a cylinder directly above it. As soon as the piston reached the top of this cylinder, the engine operator sprayed cold water directly into the cylinder, condensing the steam. By doing this, the operator created a partial vacuum which allowed the atmospheric pressure to push the piston downward. Understanding how the Newcomen engine works helps us appreciate James Watt’s contribution to this history.

But wait! Didn’t James Watt invent the steam engine? Obviously no, he did not; though “inventor of the steam engine” is a title that histories do sometimes bestow upon Watt. It was Watt, however, who separated the boiler and condenser chambers in his engine, and that made a huge difference in engine efficiency. Additionally, Watt built a partnership with Matthew Boulton, who we can think of as the commercial mastermind of the steam engine. Boulton found a market beyond Cornwall miners: the factory. He realized that mill owners were desperate to add energy-saving technologies to their machinery, and Watt’s engine delivered [4]. By the early 1800s, inventors were beginning to experiment with other energy-saving tweaks, which brings us back to the question of thermodynamics.

Sadi Carnot and the First Stirrings of Thermodynamics

If you’ve studied thermodynamics or taken a heat transfer course, you likely learned about the “Carnot cycle.” In this classic model, a system moves through a series of what are called adiabatic and isothermal stages, returning to its original state in the completion of one “cycle.” Thus, the cycle is theoretically reversible, performing mechanical work on its surroundings with maximum efficiency. It isn’t important to understand the ins and outs of the Carnot cycle for this discussion, but it might be helpful to see it represented graphically. We calculate the mechanical work of the Carnot cycle by measuring the area enclosed in the curve created by the changes in volume and pressure.

A Typical Carnot Cycle Represented Graphically: Photo by E. Generalic
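For the special case of an ideal gas, that enclosed area has a closed form: the two adiabatic legs cancel, leaving W = nR(T_hot - T_cold)·ln(V2/V1). A sketch with illustrative numbers of my own choosing (not Carnot’s, who reasoned with caloric rather than ideal-gas equations):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def carnot_net_work(n_mol, t_hot, t_cold, v1, v2):
    """Net work per cycle for an ideal-gas Carnot cycle.

    The adiabatic legs cancel, so the area enclosed on the P-V
    diagram reduces to the difference of the two isothermal works:
    W = n * R * (T_hot - T_cold) * ln(V2 / V1).
    """
    return n_mol * R * (t_hot - t_cold) * math.log(v2 / v1)

# 1 mol of gas, doubling its volume between 500 K and 300 K:
w = carnot_net_work(1.0, 500.0, 300.0, 1.0, 2.0)
print(round(w, 1))  # 1152.6 J per cycle
```

Widening either the temperature gap or the volume ratio enlarges the enclosed area, and with it the work extracted per cycle.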

What you may not know is that the Carnot engine cycle nearly slipped into obscurity after Sadi Carnot’s early death; had his writings not been excavated by a few key individuals, thermodynamics might never have materialized under the precise conditions that it did.

Sadi Carnot was a French military scientist, following in the footsteps of his father, Lazare Carnot (1753-1823), who served as one of Napoleon’s leading scientists and was forced into exile after the restoration of the French monarchy in 1815 [4]. In the summer of 1824, Sadi published a slim volume titled Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power. Few read this book during Carnot’s lifetime, and it was nearly lost forever after his death in 1832. However, a French engineer named Émile Clapeyron uncovered Carnot’s text and recast many of his verbal arguments into mathematical form in 1834. From this, the “Carnot Cycle” captured the attention of scientists trying to retool the steam engine into a more efficient machine.

One of these scientists was William Thomson (later Lord Kelvin), who combined Carnot’s and the young English physicist James Prescott Joule’s ideas to carve out what is arguably the first iteration of the laws of thermodynamics. Thomson wasn’t alone in doing this work: classical thermodynamics was a discipline built from the collaborations of many nineteenth-century scientists. But it is important to understand how and why Carnot’s insights motivated Thomson to codify thermodynamics.

What Carnot gave us was a powerful metaphor for conceptualizing heat transfer. Carnot argued that a steam engine works like a water wheel: a water wheel produces work because water falls through a height. Similarly, he said, we extract work from a steam engine because heat falls from a higher temperature to a lower one. You can maximize work extraction by raising the height that water falls. Likewise, allow heat to “fall” through a larger temperature differential and you can maximize work output [5].

Let’s take a moment to think about this revolutionary concept.

Waterwheel: photo by Pikist

At the time of Carnot’s writing, scientists thought that heat was an invisible fluid called “caloric.” Much like water moving from one height to another, the amount of caloric was thought to remain constant as it moved from one temperature to another. In passing from the boiler to the condenser, caloric simply transferred from a concentrated to a diffuse state. It did not disappear. The water wheel metaphor tracks in this regard: we move the water from a high state to a low state, but the amount remains consistent throughout the cycle.

Although heat is not a substance, and although scientists knew this by the time Thomson pored over Carnot’s work, Carnot had more or less demonstrated the principle of entropy. He argued that the amount of work a steam engine can produce is limited by the temperature differential, or the difference between its hottest and coldest temperatures. And thus, heat always flows from a hot temperature to a colder one: we can’t make it flow backwards.

A statement like this sounds banal today, which makes it easy to forget how revolutionary it was in a pre-thermodynamics world. Carnot was demonstrating that there is an upper limit on how efficient an engine can be, and he argued that his proposed cycle was that limit.
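That upper limit is what we now write as the Carnot efficiency, 1 - T_cold/T_hot, with temperatures in kelvin. A minimal sketch of the waterwheel logic, using made-up temperatures:

```python
def carnot_efficiency(t_hot, t_cold):
    """Upper bound on any engine's efficiency (temperatures in kelvin)."""
    return 1.0 - t_cold / t_hot

# A bigger temperature "fall" raises the ceiling on efficiency,
# just as a taller waterfall extracts more work from the same water:
print(carnot_efficiency(400.0, 300.0))  # 0.25
print(carnot_efficiency(600.0, 300.0))  # 0.5
```

No real engine reaches this bound; Carnot’s ideal, reversible cycle defines it.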

So, we have our answer: the steam engine came first, and it inspired the science of thermodynamics. In fact, William Thomson and his brother James were deeply wedded to Carnot’s metaphor of the waterwheel, and they even built models to study Carnot’s approach. James Thomson’s practical training in marine engineering and steam engines directed both Thomson brothers to the problem of industrial waste, or machine power consumption.

This is where the intersection of thermodynamics and steam engines is particularly important. William Thomson admired Carnot not just for his scientific insights, but because Carnot believed that with an “ideal” engine, France might efficiently “carry the fruits of civilization over portions of the globe where they would else have been wanting for years” [5]. He was a favorite among scientists and engineers who puzzled out the problem of industrial work losses because a win for industry was a win for the nation.

Carnot envisioned, for France rather than England, that revolutionizing the steam engine cycle would enable miners to release far more coal energy from the ground than an engine expended. In Carnot’s words, “The most signal service that the steam-engine has rendered to England is undoubtedly the revival of the working of the coal mines, which had declined, and threatened to cease entirely, in consequence of the continually increasing difficulty of drainage, and of raising of the coal… To take away today from England her steam-engines would be to take away at the same time her coal and iron. It would be to dry up all sources of wealth, to ruin all on which her prosperity depends, in short, to annihilate that colossal power” [5]. For Carnot, the military scientist, this was an imperial mission, and a civilizing mission. The nation with the easiest access to coal would have the strongest military, the tools to conquer and occupy remote lands, and the industrial power to funnel resources back from the periphery and commercialize them.

Of course, Britain accomplished all of that, reaching the zenith of its imperial status by the end of the nineteenth century. Consider that coal abundance in Britain freed up agricultural populations whose land had previously supplied fuel and food. Unfettered by the seasonal and geographical limitations of waterpower, industry thrived in urban locations, where populations grew dense during the remainder of the nineteenth century. As labor forces turned increasingly to producing industrial goods, Britain relied on its peripheral territories for food and raw materials. Without such uncompensated labor and lifeways, Britain could not have sustained its growth and imperial status. This is a conversation beyond the scope of the present discussion, but it is too important not to mention.

Therefore, we are left with an easy answer with far more complicated resonances. The steam engine predates thermodynamics, and therefore it was what we think of as “applied science” that precipitated the development of classical energy physics. Yet a crucial combination of imperial expansion, industrial necessity, and colonial violence produced thermodynamics, rather than the detached discoveries and observations of scientists. To my mind, the cultural forces involved in the birth of thermodynamics are some of the most important reminders that energy science always was, and remains, wedded to the ideals of extraction, accumulation, and exploitation.


[1] Mitchell, Timothy. Carbon Democracy: Political Power in the Age of Oil. Verso, 2011.

[2] Pomeranz, Kenneth. The Great Divergence: China, Europe, and the Making of the Modern World Economy. Princeton University Press, 2000.

[3] Savery, Thomas. The Miner’s Friend; or, An Engine to Raise Water by Fire. 1702. Reprinted London: W. Clawes, 1827.

[4] Hunt, Bruce J. Pursuing Power and Light: Technology and Physics from James Watt to Albert Einstein. Johns Hopkins University Press, 2010.

[5] Carnot, Sadi. Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power. 1824. Translated by R.H. Thurston. Edited by Eric Mendoza. Dover Publications, 1960.

“Energy was not out there in the world waiting to be found, a fact of nature finally revealed to human consciousness”

Cara Daggett, The Birth of Energy

When I want to draw out the capaciousness of the term, energy, I ask my students to turn to one another and define it. What is energy? If an individual from an alien planet needed a description of energy as Earthlings know it, what would you tell them? What does it look like? Smell like? Feel like?

As you can imagine, this exercise elicits responses running the gamut from the standard scientific definition of energy (i.e., the ability to perform work) to creative descriptions like, “a child running around on a sugar high,” to a run-down of the chakra system. Regardless of the responses I receive, each group of students contributes a unique collection of definitions. Of course, that is my point: that energy has no singular definition.

Energy is somehow everything and nothing all at once. We learn that everything is energy; that energy is constantly operating in transfer to do things like keep us alive, fuel our machines, heat the planet. But energy is also, ironically, nothing, in that it is almost indescribable. If you attempt to describe what energy is on an elemental level, you will find yourself struggling to some degree. Richard Feynman famously lectured that “we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount” [1]. So, why does energy feel cosmic and universal if it is, in fact, a floating signifier? Where does this term come from, and why does it have so many applications?

On Etymology: Energy Is an Old Word, but Not for Scientists

Energy began as a rhetorical term. We can trace it all the way back to Aristotle, whose energeia (ἐνέργεια) combines the Greek en- (“in”) and -ergon (“work”) [2]. Related to energeia, Aristotle discusses two senses of the Greek term dunamis, which denotes possibility, potential, and the power for change. In the first sense, dunamis is a kinêsis, or a movement; in the second, it is the latent capacity for change [3]. You might see here a resemblance to kinetic energy, the energy of an object’s motion, and potential energy, the energy of position. If so, you’re not wrong, but it would still take centuries before energy found its way into the scientific canon.
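For readers who want the modern formalism that this Aristotelian distinction loosely anticipates, the textbook split between motion and position can be sketched as follows (a minimal illustration, not anything Aristotle himself wrote):

```latex
% Kinetic energy: the energy of an object's motion (mass m, speed v)
E_k = \tfrac{1}{2} m v^2
% Potential energy: the energy of position, here gravitational (height h, gravity g)
E_p = m g h
% In a closed system, transformations between the two conserve their sum:
E_k + E_p = \text{constant}
```

The resonance is structural rather than genealogical: kinêsis corresponds to energy actually in motion, while the second sense of dunamis corresponds to a stored capacity awaiting release.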

Aristotle’s energeia is the foundation for the Latin word energia, the French word énergie, and the English word energy. The latter two originated in the sixteenth century and connect work to virtue and goodness [2].

A quick look at the Oxford English Dictionary [4] is also eye-opening. The OED entries for energy include:

  • “With reference to speech or writing: Force or vigour of expression.”
  • “Impressiveness (of an event).”
  • “Exercise of power, actual working, operation, activity; freq. in philosophical language.”

None of these definitions is scientific, and all bear traces of energy’s classical associations. In fact, displacing energy from physics can render the term poetic or peculiarly moralistic. If you’ve ever read William Blake’s The Marriage of Heaven and Hell (1790), you’ll recall that Blake’s Devil has much to say about energy: “Energy is Eternal Delight.” For the Devil, energy is akin to the physical passions, while reason is mere intellection; as such, he comes down firmly on the side of energy [5].

It makes sense that energy is the bedfellow of British Romanticism, the school to which Blake belongs. The Romantics emphasized the power and importance of the individual spirit, and the sublimity of the imagination. Art and poetry originate from this profoundly energetic locus at our core, producing the famous “spontaneous overflow of powerful feelings” that Wordsworth tells us are recollected in tranquility.

But then, where does that leave physics? If Blake’s Devil was contrasting energy and reason at the end of the eighteenth century, when did science appropriate energy for its own agenda?

When and How Did Energy Become a Scientific Term?

Energy had no formal place in physics until the late 1840s, when a group of northern British scientists appropriated the term to unify a range of natural phenomena including heat, light, and electromagnetism. In addition to heuristic unification, these scientists, William Thomson (later Lord Kelvin) and James Clerk Maxwell included, were motivated by a desire to secure institutional authority for their mostly Scottish Presbyterian cohort [6]. They selected the word energy to reorient physics towards the areas of research that studied transfers and potentials. The Newtonian term force, previously used in these cases, was just too limiting to perform the unifying function that energy promised. Thus, energy was scientifically defined as “the ability to perform work.”
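The contrast the North British scientists were drawing can be made concrete with a short derivation (a hedged sketch in modern notation, not the Victorians’ own): force is a local, instantaneous relation, while energy is what accumulates over a process and is conserved across its transformations.

```latex
% Newtonian force: an instantaneous relation between mass and acceleration
F = m a
% Work: force accumulated over a displacement, from position x_1 to x_2
W = \int_{x_1}^{x_2} F \, dx
% The work-energy theorem: work done on a body changes its kinetic energy
W = \Delta E_k
% Hence energy, defined as the ability to perform work, can be tracked
% through conversions (heat, light, electromagnetism) where force cannot:
E_{\text{total}} = \text{constant}
```

It is this bookkeeping across transformations, impossible to express with force alone, that made energy the natural banner for unifying heat, light, and electromagnetism.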

Energy now had a scientific home; yet the process of codifying energy wedded the political and theological agendas of these scientists to an emerging concept of how fuel and labor should be managed. In other words, defining energy in science was a political and historically specific act, not purely the outcome of detached, objective investigation. This point should be underscored, and it is the topic of another (future) post.

Because energy had centuries of humanistic usage behind it by the 1840s, these prior associations layered into the nascent science of energy. As you can imagine, this overdetermined thermodynamics during its crucial years of codification, and remains one of the reasons why we, my students included, can find so many definitions for a single term.

But then, why would nineteenth-century scientists want a term that already existed? Why wouldn’t they just create a new one?

There were several justifications for choosing an extant, patently non-scientific word to generate an entirely novel branch of science. Consider the following:

  1. In the mid-nineteenth century, “science” did not remotely resemble the highly disciplinary, extensively institutionalized systems we now have under that name. The differentiation of scientific disciplines was a late-nineteenth-century affair. For most of the century, and long before, Britain’s educated persons received training in the classics. Garnering authority for a new scientific regime meant convincing an existing intellectual public that such an enterprise mattered. And so, of course, reaching back for energy’s Greco-Roman pedigree made quite a bit of sense.

  2. Recall that energy was also a poetic term. In the nineteenth century, it was not uncommon for scientists to use poetry to shore up support for their scientific arguments. These classically educated “men of science” marshalled the authority of poetry to shift conversations in their own fields. For example, physicist and mathematician James Clerk Maxwell parodied Shelley’s Prometheus Unbound in a paper he delivered at the British Association for the Advancement of Science in 1876. He wrote his own poem about “Energy” dethroning “Force” and shedding its Newtonian limitations [7].

  3. Lastly, the phenomena that the North British scientists tried to unify under the term energy were abstract. Unlike objects that are merely difficult to access at their scale, such as microbes too small to see or planets too large and far away, energy’s objects are hard to grasp because they gain no purchase without language and structure to give them form. In other words, you can’t go out and simply find heat, pick it up, bring it home, photograph it, and expect that anyone else doing the same will arrive at the same result. Electromagnetism is even more elusive. One needs language, as well as experimental rigor, to guide these concepts into form. And, because the North British crew wanted to describe the phenomena of conservation and transformation, energy seemed like a perfect word. Remember that, for Aristotle, energeia meant activity, movement, vigor, and dynamism. It did not conjure up images of stasis.

Transformation is key. The final point I’ll make is that energy is beautifully situated towards the literary because, in both its scientific and its pre-scientific registers, it is fundamentally about transformation. Energy privileges movement, which primed its application to the governance of fuel sources. However, its metaphysical and literary connotations far predate the industrial era. When we consider that energy physics required models, language, and other representational forms to coax energy out of its elusive abstractions, it becomes clear how much the transformations of figurative language complement the energetic transformations of physical phenomena.

Figurative language – like metaphor, metonymy, allegory, and analogy – takes one thing and transforms it into another thing. Using metaphor we might say, “the moon was a face lit from within.” The moon is obviously not a face, yet we turn it into one with language, inspiring imagery that transforms both “the moon” and “a face” on their own into something novel. So, too, does energy describe transfers and relationalities in states of being. When we dive into the weeds of nineteenth-century physics, we find figurative language everywhere. The Victorians applied analogy and metaphor (i.e., the language of transformation) to define and describe energetic phenomena (i.e., the physics of transformation). Take a beat to admire the synergy.

To sum it up, energy began as a humanistic term and accumulated centuries of meaning before physicists appropriated it in the late 1840s for the new science of thermodynamics. At that point, energy’s prior classical associations layered into its new definition as the ability to do work and buttressed a new unifying agenda meant to reorient physics away from Newtonian “force” and towards energy’s potential for conversion, transformation, and conservation. All this polysemy and layering has left us with an overdetermined term that, astoundingly, manages to extend from the physical to the metaphysical while also escaping an elemental definition. That is why, I argue, deconstructing energy’s capaciousness is worthwhile: because it is not stable. This means that there is enormous potential to dismantle the governing structures of energy that we now find most violent, as in fossil fuel infrastructures. A topic for another time.


[1] Feynman, Richard. The Feynman Lectures on Physics, vol. I: Mainly Mechanics, Radiation, and Heat. Basic Books, 2011.

[2] Daggett, Cara New. The Birth of Energy: Fossil Fuels, New Materialisms, and the Politics of Work. Duke University Press, 2019.

[3] Cohen, Marc S. “Aristotle’s Metaphysics.” The Stanford Encyclopedia of Philosophy. Edited by Edward N. Zalta. Metaphysics Research Lab, Stanford University, 2020. https://plato.stanford.edu/entries/aristotle-metaphysics/

[4] “energy, n.” OED Online. Oxford University Press, Aug. 2020.

[5] Blake, William. The Marriage of Heaven and Hell. 1790.

[6] Smith, Crosbie. The Science of Energy: A Cultural History of Energy Physics in Victorian Britain. University of Chicago Press, 1998.

[7] Clarke, Bruce. Energy Forms: Allegory and Science in the Era of Classical Thermodynamics. University of Michigan Press, 2001.