Notes on a Hegelian interpretation of Riemann’s Zeta function


This is an extended version of Hegelian contradiction and the prime numbers (part 2)

Why investigate the relationship between Hegel’s philosophy and Riemann’s mathematical analysis of the primes? Essentially, I’m “testing” the (ambitious) claims of Hegel’s Science of Logic.

Hegel claims to have discovered the necessary structure of anything that exists (“determinate being”). That structure is a dynamic unity of being (affirmation) and nothing (negation) that are in contradiction with each other. According to Hegel, this structure necessarily generates both “physical” and “mental” phenomena. So determinate being should present itself, in a ubiquitous manner, in both physical theory and mathematical logic.

In Notes on a mathematical interpretation of the opening of Hegel’s Science of Logic I developed a formal interpretation of Hegel’s determinate being. I concluded that determinate being necessarily generates harmonic (wave) phenomena, and that physical theories are essentially “harmonic oscillators all the way down”. First test passed? Perhaps.

Now I turn to the realm of mental phenomena, in particular those abstract ideas that seem especially God given, immutable and perfect: the integers and the primes. Does Hegel’s determinate being appear in this realm too? Surprisingly, the answer is yes.

Part 1: Riemann’s revolution in the study of the primes

In this first part, we’ll take a whirlwind tour of the primes and some of their properties and how Riemann, in 1859, revolutionised their study.

The fundamental theorem of arithmetic

Prime numbers are integers greater than 1 that can only be divided by themselves or 1.

The fundamental theorem of arithmetic states that every integer greater than 1 can be written as a product of prime numbers in exactly one way. A prime number, in contrast, cannot be further broken down into a product of smaller numbers. In this sense, the primes are the elementary atoms of multiplication.

So the primes are special in two ways: we can’t make them by multiplying other integers together. And all the other integers can be made by multiplying a unique combination of primes together.
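Both facts are easy to check on a computer. Here’s a minimal Python sketch (my own illustration, nothing more) that factors an integer by trial division:

```python
def prime_factors(n):
    """Factor an integer n > 1 into primes by trial division,
    smallest factor first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(45))    # [3, 3, 5]: 45 = 3 * 3 * 5
print(prime_factors(47))    # [47]: already an atom of multiplication
```

Factoring 45 yields the unique combination [3, 3, 5], while 47 comes back whole: it can’t be built by multiplying smaller integers.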

Such facts might be of purely mathematical interest. But number theory, although abstract, reveals very general properties of reality.

For example, let’s say I give you 45 pebbles and ask you to arrange them in a rectangle. No problem you say, and very quickly, you assemble a 9 by 5 rectangle.

But now I hand you 2 more pebbles, and ask you to build a bigger rectangle.

No matter how long, or how hard, you try you will never make a rectangle from 47 pebbles. It’s impossible, for the simple reason that 47 is prime and so can’t be written as a product of two smaller numbers.

The disorder of the primes

Now, imagine the infinity of the integers stretched out horizontally on a number line.

We see an infinite number of primes. We also see that the primes appear at irregular intervals, “growing like weeds” among the ordinary numbers. The spaces between the prime numbers aren’t uniform. Sometimes the gaps are small, and sometimes really big.

The first 195 integers. The primes are red.

In fact the gaps tend, on average, to get bigger and bigger as we look at higher and higher parts of the number line.

The staircase of the primes: as we count (along the x-axis) we jump up 1 unit (on the y-axis) if we encounter a prime. The gaps between successive primes are not uniform and so the staircase is irregular. Here we see 19 primes between 1 and 70.
There are only 9 primes between 500 and 570. The steps in the staircase are getting longer.
There is only 1 prime between 5,000,000 and 5,000,070.

Although the gaps tend to get bigger there are always short gaps. As of today (early 2019) mathematicians know there are infinitely many pairs of primes that differ by at most 246. So as we ascend the prime staircase, to unimaginable heights, the steps get longer on average, but there are always short steps.

This is a very irregular kind of staircase!
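We can check the staircase counts quoted in the captions above with a few lines of Python (a naive trial-division count, purely illustrative):

```python
def is_prime(n):
    """Trial-division primality test (slow but transparent)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def count_primes(lo, hi):
    """Count primes p with lo <= p <= hi."""
    return sum(1 for n in range(lo, hi + 1) if is_prime(n))

print(count_primes(1, 70))                 # 19 steps in the first staircase
print(count_primes(500, 570))              # 9 primes in this stretch
print(count_primes(5_000_000, 5_000_070))  # the caption above reports 1
```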

The Greek filtering algorithm

The reason for the irregular staircase is, in one sense, perfectly clear and holds no mystery whatsoever. Early Greek mathematicians specified a very simple algorithm (“the sieve of Eratosthenes”) for generating the gaps between the primes.

We start at 2 and mark it red since it’s prime. We then jump 2 steps to 4, and mark it black, since 4 can be divided by 2. And we keep jumping 2 steps along the number line, marking all multiples of 2 as black, since they can’t be prime.

We then move to the next unmarked number, which is 3. It must be prime, since no smaller number (other than 1) divides it, and so we mark it red. We now keep jumping 3 steps and mark all these numbers black, since they can be divided by 3 and can’t be prime.

We continue this process, filtering out non-prime numbers, until we eventually draw the table of black and red numbers above.

This is a very simple algorithm, and quite easy to write as a computer program. So there’s no mystery in the irregular spacing of the primes. The simple rules that generate the irregular gaps between primes are entirely transparent.
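To show just how simple, here is the sieve as a short Python sketch (one illustrative implementation among many):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return the list of primes <= limit."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, limit + 1, p):
                flags[m] = False    # multiples of p can't be prime
    return [n for n in range(2, limit + 1) if flags[n]]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A dozen lines generate the red numbers, irregular gaps and all.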

However, from another perspective, we also see evidence of extreme regularity in the distribution of the primes. And this is when things start to get a lot less simple.

The order of the primes

Let’s construct a different staircase that hints at a clear law that governs the distribution of the primes. The von Mangoldt function is:

Λ(n) = log(p), if n = p^k for some prime p and integer k ≥ 1; otherwise Λ(n) = 0.
This function, as we travel along the number line, creates a step in the staircase whenever we hit either (i) a prime number or (ii) any number that’s the power of a prime. So, for example, we create steps at 2, 2 squared, 2 cubed, 2 to the power 4, and so on. Similarly, we create steps at 3, 3 squared, 3 cubed, and so on.

But the heights of these steps vary: they are given by the logarithm of the prime base, so bigger primes make bigger steps. The step heights at 2, 2 squared, 2 cubed, and so on, are all of size log(2), which is about 0.7. But the step heights at 3, 3 squared, 3 cubed, and so on, are all of size log(3), which is about 1.1.

The new staircase, known as the Chebyshev function, is formed by stacking all these steps together:

ψ(x) = Σ Λ(n), summing over all n ≤ x.
What does it look like?

The Chebyshev function between 1 and 70. The blue line is the staircase, and the red line is a perfect straight line.

Although we’ve only looked at a tiny part of the number line it seems this new prime staircase approximates a perfect straight line. If this relationship continues to hold then the distribution of primes would follow a very simple and regular law.

The Chebyshev function between 500 and 700. We see some deviation, but overall the primes track a straight line.
The Chebyshev function between 5,000,000 and 5,002,000. The straight line law continues to approximately hold.

At this micro level, the primes and their powers are irregularly spaced. There’s disorder. But if we zoom out we see numerical evidence that they approximate a relatively simple, straight-line law. There appears to be almost perfect order.
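We can generate this numerical evidence ourselves. The Python sketch below (my own illustration) computes the Chebyshev staircase by adding log(p) for every prime power p^k up to x, and lets us compare it with the straight line y = x:

```python
import math

def chebyshev_psi(x):
    """Chebyshev staircase psi(x): sum of log(p) over prime powers p^k <= x."""
    limit = int(x)
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    total = 0.0
    for p in range(2, limit + 1):
        if is_prime[p]:
            pk = p
            while pk <= limit:       # p, p^2, p^3, ... each contribute log(p)
                total += math.log(p)
                pk *= p
    return total

for x in (70, 700, 5000):
    print(x, chebyshev_psi(x))   # staircase height vs the straight line y = x
```

At x = 70 the staircase height is about 66.5, already close to 70, and it stays close at larger x.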

So there is both disorder and order in the primes. The disorder is, in some sense, very easy to understand since a simple algorithm generates it. But the order is very difficult to understand, since proving this straight-line law required nothing less than a revolution in the methodology of number theory.

Riemann’s revolution

Number theory studies properties of discrete magnitudes and is as old as civilisation itself. Up to the 17th Century mathematicians used elementary methods in their proofs, built from the basic operations of arithmetic.

But the discovery of the calculus by Newton and Leibniz started to change that. In the 19th Century mathematicians realised that methods developed for continuous magnitudes, such as differentiation and integration, also applied to number theory, and were in fact more powerful. The modern field of analytic number theory was born.

In 1859 the mathematician Bernhard Riemann used the new tools of analytic number theory to look at the integers in an entirely new way.

The invention of the telescope revolutionised the science of astronomy. New machines can help us see previously hidden phenomena.

Riemann invented a new kind of mathematical machine, called the Zeta function, which reveals hidden properties of the integers.

The Zeta function is a very complicated kind of machine, and there are lots of equivalent ways of describing it. Here’s one way, which describes its behaviour over a restricted range of inputs:

ζ(s) = (1 / (1 − 2^(1−s))) · ( 1/1^s − 1/2^s + 1/3^s − 1/4^s + … )

A definition of Riemann’s Zeta function, valid in the strip 0 < Re(s) < 1.

The first thing to note is that you feed the Zeta machine with complex numbers. Zeta then performs some computations and hands you back a new complex number.

Complex numbers, you will recall from school, have two parts: an ordinary real part and a so-called imaginary part, which is some multiple of the square root of −1.
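For the curious, the alternating-series definition sketched above is easy to program. This is a slowly converging, purely illustrative approximation (one standard way of defining ζ in this region, not how zeros are computed in practice):

```python
import math

def zeta(s, terms=100_000):
    """Approximate Riemann zeta via the alternating (eta) series:
    zeta(s) = (1 / (1 - 2**(1 - s))) * sum_{n>=1} (-1)**(n+1) / n**s.
    This converges for Re(s) > 0 (s != 1), though slowly off the real axis."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

# Sanity check at a real input: zeta(2) should be pi^2/6 = 1.644934...
print(zeta(2.0), math.pi ** 2 / 6)

# Near the first non-trivial zero the output lands close to the origin.
# (Many terms are needed: convergence on the critical line is slow.)
print(abs(zeta(0.5 + 14.1347j, terms=500_000)))
```

Feed it a complex number and it hands back a complex number, exactly as described.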

The Zeta function is Riemann’s mathematical telescope. What does it do? And how can it tell us anything about the integers, or the distribution of primes?

The zeros of the Zeta function

Ordinary functions take one number as an input and return an output. So we can graph ordinary functions by pairing the inputs and outputs as coordinates in the plane.

But complex-valued functions, like Zeta, are harder to visualise. A complex number has two parts. So both the input and the output of the Zeta function are points in the complex plane. The Zeta machine therefore takes any point on a plane surface and moves it somewhere else. And it moves all the points. And that’s hard to visualise in a single diagram.

So what we’ll do instead is look at a subset of points in the plane and see where the Zeta function moves them to. Here’s our first example:

The Zeta function maps points in the plane to new points in the plane. On the left-hand-side we have the blue input points. On the right-hand-side we have their corresponding outputs (red line).

We can collapse these figures together to summarise the behaviour of Zeta over these specific inputs:

The Zeta function maps the set of blue complex numbers to the set of red complex numbers (in the order indicated by the arrows: so the tip of the first blue arrow maps to the tip of the first red arrow).

Now we can get to the point. Riemann discovered that Zeta maps some special input values to the origin of the complex plane.

For example, ζ(0.5 + 14.1347 i) evaluates to (approximately) 0. So we call this input value a “non-trivial” zero of the Zeta function.

Here’s a plot of the first 3 non-trivial zeros that Riemann computed:

Zeta maps the blue inputs to the red output spiral. The right-hand-side zooms in on the spiral. We see that it intersects the origin on 3 occasions. So 3 blue input points must be zeros of the Zeta function. In fact, they are 0.5 + 14.1347 i, 0.5 + 21.0220 i, and 0.5 + 25.0109 i.

Riemann knew there must be an infinite number of zeros. But he could only calculate a handful with pen and paper. We can easily explore more with modern computers:

The output of the Zeta function for input values 0.5 + y i where 0 < y < 200. The red spiral hits the origin 79 times.
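That count of 79 can be cross-checked against a classical estimate, the Riemann-von Mangoldt formula (a standard result, not discussed in the main text), which predicts how many zeros have imaginary part between 0 and T:

```python
import math

def zero_count_estimate(T):
    """Riemann-von Mangoldt estimate of the number of non-trivial zeros
    with imaginary part between 0 and T:
    N(T) ~ (T/2pi)*log(T/2pi) - T/2pi + 7/8."""
    a = T / (2 * math.pi)
    return a * math.log(a) - a + 7 / 8

print(zero_count_estimate(200))   # about 79.2, matching the 79 spiral crossings
```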

The zeros and the primes

Now this is all very pretty, but so what?

Riemann discovered, using techniques of complex analysis and integral transforms, a remarkable fact: the locations of the zeros of the Zeta function encode the distribution of the prime numbers.

For example, this remarkable fact means we can construct a formula for Chebyshev’s prime staircase in terms of the zeros of the Zeta function:

ψ(x) = x − Σ x^ρ/ρ − log(2π) − (1/2)·log(1 − x^(−2))

An explicit formula: the Chebyshev prime staircase equals an expression that sums over the zeros of the Zeta function (each ρ in the summation denotes a zero).

(How can the height of a staircase equal a function of complex numbers? The trick is that the Zeta zeros come in conjugate pairs, and therefore the imaginary parts of the zeros cancel out on the right-hand-side of the explicit formula.)

We can ignore the log(2π) constant term, since it quickly gets swamped as we ascend the number line. The first term, x, is a big reveal, since that’s exactly what we’d expect to see if the straight-line law was true!

If the straight-line law were perfectly true we’d simply have ψ(x) = x. But we don’t: we have an extra term, an infinite sum over all the Zeta zeros. The Zeta zeros therefore control the fluctuations of the primes (and their powers) around the straight line.

How far does the Chebyshev staircase deviate from a straight line across the whole infinity of integers? A lot, or only a little bit? Are there regions where it is very, very far away from the straight line, or is the deviation bounded? In principle, the Zeta zeros can answer these questions.

Now, if the Chebyshev staircase really does approximate a straight line, ψ(x) ~ x, as we ascend to infinity, then the infinite sum over the Zeta zeros in the explicit formula above must eventually be “overpowered”, or dominated, by the first term, x.

And whether that happens depends on the precise placement of the infinite number of the Zeta zeros.
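We can watch the zeros sculpt the staircase numerically. The sketch below (my own illustration) truncates the explicit formula at the first ten zeros, whose imaginary parts are well-known published values; the small extra term −(1/2)·log(1 − x⁻²), which comes from the so-called trivial zeros, is included for completeness:

```python
import math

# Imaginary parts of the first ten non-trivial zeros 0.5 + i*t
# (standard published values, truncated to six decimal places)
ZEROS_T = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
           37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def psi_direct(x):
    """Chebyshev staircase: sum log(p) over all prime powers p^k <= x."""
    total = 0.0
    for n in range(2, int(x) + 1):
        m, p = n, None
        for d in range(2, int(n ** 0.5) + 1):
            if m % d == 0:
                p = d
                while m % d == 0:   # strip out every factor of d
                    m //= d
                break
        if p is None:
            total += math.log(n)    # n has no small divisor: it's prime
        elif m == 1:
            total += math.log(p)    # n = p^k, a pure prime power
    return total

def psi_explicit(x, zeros=ZEROS_T):
    """Truncation of the explicit formula. The zeros come in conjugate
    pairs, so each pair contributes 2*Re(x^rho / rho)."""
    zero_sum = 0.0
    for t in zeros:
        rho = complex(0.5, t)
        zero_sum += 2 * (x ** rho / rho).real
    return x - zero_sum - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)

print(psi_direct(100))     # the true staircase height at x = 100
print(psi_explicit(100))   # ten zeros already get quite close
```

Adding more zeros to the sum reproduces the jagged staircase in ever finer detail.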

The big problem, however, is that, as of early 2019, mathematicians simply don’t know where all the Zeta zeros actually are. It’s just not easy to find out where they all live.

The Prime Number Theorem

Riemann knew the zeros must lie somewhere in a critical strip: every zero has the form x + i y, with 0 <= x <= 1.

The critical strip. The non-trivial zeros of the Zeta function all live somewhere in here (where the strip stretches up to positive infinity, and down to negative infinity).

But it wasn’t until 1896 that mathematicians proved that no zeros exist on the lines x=0 or x=1 (the left and right-hand sides of the critical strip). This knowledge alone is sufficient to prove that the distribution of primes is indeed governed by a straight-line law. The result is now known as the Prime Number Theorem, and was a crowning achievement of analytic number theory:

ψ(x) / x → 1 as x → ∞

The Prime Number Theorem: the relative error between the Chebyshev prime staircase and a perfect straight line gets closer to zero as we approach infinity.

At the microscopic level, the primes and their powers are spaced very irregularly. But, if we zoom out to the macroscopic level, they approximate a simple, straight-line law. The Prime Number Theorem means this law necessarily holds all the way to infinity.

The Zeta function was the key to unlocking this hidden order of the primes.

The Riemann hypothesis

The Prime Number Theorem tells us a lot about the distribution of the primes. But if we knew exactly where all the zeta zeros lived in the critical strip then we’d know even more, and greatly advance our understanding of the multiplicative structure of integers and various generalisations.

Riemann’s hand calculations suggested the following hypothesis: all the non-trivial Zeta zeros live on the vertical line x = 0.5, which is called the critical line.

As of 2004, Riemann’s hypothesis has been numerically confirmed for the first 10,000,000,000,000 zeros. (Of course, this is nowhere close to infinity, so the hypothesis might still fail.)

But, as of February 2019, Riemann’s hypothesis remains unproved. A small army of mathematicians have tried, but the proof is elusive. In consequence, it’s the most famous unproved conjecture in the whole of mathematics.

The mystery of the Zeta function

So that’s a bird’s eye view of the Zeta function and its relationship to the distribution of the primes. Obviously, I’ve glossed over a great deal of mathematical detail. In particular, I’ve skipped the mathematical argument that relates the Zeta zeros to the Chebyshev staircase. (For those who want to explore, Marcus du Sautoy’s The Music of the Primes, John Derbyshire’s accessible but slightly more technical Prime Obsession, and Matthew Watkins’ fun, illustrated and psychedelic, three volume Secrets of Creation, are all good popular accounts of that logic.)

Prime numbers are so simple a child can understand them, yet so complex that an army of mathematicians, working for over one hundred years, have yet to completely decipher their secrets. Quite naturally, popular accounts emphasise this mysterious quality.

But I want to consider a different, but related, mystery: Why is the Zeta function uniquely successful in encoding knowledge about the integers? Why is complex analysis more powerful than elementary techniques in number theory? Why can continuous magnitudes, imaginary numbers, differentiation and integration etc. tell us new things about the ordinary counting numbers and the properties of the simple operations of addition, subtraction, multiplication and division?

Riemann’s new way of looking at the integers is mathematically unambiguous. But what this new way of looking is, and why it should prove so effective, is more mysterious.

Even mathematicians aren’t exactly sure why the Zeta function encodes information about the distribution of the primes, only that it does.

To try to answer these questions I’ll now adopt a highly non-traditional and explicitly philosophical approach to elucidating the meaning of the mathematics of the Zeta function.

Part 2: A Hegelian interpretation of Riemann’s Zeta function

In this second part, we’ll show how Hegel’s metaphysics can (begin to) explain why Riemann’s Zeta function reveals hidden properties of the integers.

Back to Hegel

A previous post, Notes on a mathematical interpretation of the opening of Hegel’s Science of Logic, developed a mathematical interpretation of the opening of that work. Hegel’s metaphysical argument aims to reveal the necessary structure of anything that exists (whether in physical reality or in the mind).

Hegel calls this necessary structure “determinate being” or “becoming”. I called the mathematical interpretation of that structure, “Hegel’s contradiction”, since it’s a dynamic unity of the opposing concepts of being and nothing:

Hegel’s contradiction describes the necessary structure of anything that exists. The above diagram is a mathematical model of Hegel’s metaphysical propositions as two coupled differential equations.

The following won’t make much sense unless you first understand how the above diagram is implied by the opening of Hegel’s Logic.

In the Science of Logic Hegel presents (what he claims is) a necessary deduction from determinate being to basic concepts such as quality, quantity, magnitude, ratios, powers etc. So Hegel attempts to deduce the basic concepts of mathematical thought (as he understood them in his time) from the Hegelian contradiction. His argument, it has to be said, is extremely difficult and obscure. I will refer to it occasionally.

But my purpose here isn’t to develop a faithful interpretation of Hegel’s text. Instead, I want to see where the following thought takes us. Assume Hegel is right, and everything is indeed ultimately composed of Hegelian contradictions. Then the integers, paragons of perfect, immutable objects that are impervious to time and exhibit no apparent change whatsoever, must be, contrary to appearances, fundamentally dynamic objects with internal contradictions that cause change and movement. The integers must also be Hegelian contradictions.

If we really want to understand deep properties of integers, such as the distribution of the primes, then we need to understand what integers truly are. And what the integers must truly be, according to Hegel, are things that ultimately must be contradictory unities. This is what the logic of Hegel’s Logic implies.

So let’s begin this experimental line of thought by defining what a “Hegelian integer” might look like.

Hegel numbers

In my Notes on a mathematical interpretation of the opening of Hegel’s Science of Logic I neglected two properties of a Hegelian contradiction: (i) the rate, or “speed”, at which being reacts to nothing (and vice-versa) and (ii) the overall “activity level”, or quantity of substance that flows within it. In other words, I examined the basic structure of a contradiction, but I didn’t try to individuate contradictions, and distinguish them from each other.

But now I want to define lots of different “Hegelian integers”. And the only way we can distinguish one contradiction from another is in terms of its internal reaction rate, and its overall activity level.

We’ll use the symbol ω to denote the rate at which being affirms nothing, and nothing negates being.

The Hegel number, H[ω], is a contradictory unity of being and nothing where ω denotes the mutual reaction rate of being and nothing.

We’ll denote the Hegelian number that corresponds to the integer ω=2 as H[2]. In consequence, being and nothing, in the contradiction H[2], react twice as fast to each other compared to being and nothing in the contradiction H[1].

The system of coupled differential equations that defines the contradiction is the same as before, except we now have the new parameter, ω. Define the Hegel number, H[ω], as the system of equations:

x′(t) = −ω·y(t), y′(t) = ω·x(t), with x(0) = f(λ,ω) and y(0) = 0

The Hegel number, H[ω], is a 2D system of coupled differential equations. The value of ω determines both the reaction “speed” of being and nothing, and the “size” of the contradiction (via an unspecified function f(λ,ω)).

Hegelian numbers not only have reaction rates but also an activity level or “scale” or “size”. As explained earlier, being and nothing interact by affirming and negating each other. The respective sizes, or strengths, of being and nothing within the contradiction, denoted by x(t) and y(t), oscillate over time. They nonetheless obey the following conservation law:

x(t)^2 + y(t)^2 = k^2

where k is some arbitrary constant. This law implies that the maximum value of x(t) is k (when y(t) is 0) and the maximum value of y(t) is also k (when x(t) is 0). For simplicity, let’s call this conserved value the size of the contradiction, because it directly relates to the quantity of substance flowing within it. So k is the overall activity level, or “energy” or “size” or “scale”, of the contradiction.

Note that, at t=0, y(0)=0, and therefore x(0) is a maximum value. We can therefore specify the activity level of a Hegelian number by setting x(0) to some arbitrary constant k.

But how “big” should a specific H[ω] number be? Right now, there seems to be no necessary reason for it to be any particular size, other than that it should be determined by ω.

So I will postpone the decision, and introduce a degree of flexibility: we set x(0), in the above equation system, to an unspecified function f(λ,ω), where ω is the reaction rate and λ is a universal energy scale shared by all H numbers. This way we can talk about the relative energies of H numbers without having to fix an absolute (and presumably arbitrary) scale. Note we already have a concept of universal time shared by all H numbers, which is t. So we can think of λ as a natural counterpart.

And that’s it. We’ve now defined “Hegel numbers”. Ordinary numbers and Hegel numbers are in a simple 1 to 1 correspondence:

The number ω corresponds to the Hegel number H[ω].
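The definition can also be checked numerically. The sketch below (my own illustration) assumes the equations of motion x′ = −ω·y, y′ = ω·x, which is my reading of the system, chosen so that x(0) is a maximum, y(0) = 0, and the conserved size is respected; f(λ,ω) is left as a free constant k:

```python
import math

def hegel_rhs(x, y, omega):
    """x' = -omega*y (nothing negates being), y' = omega*x (being affirms nothing).
    Assumed sign convention: consistent with circular phase-space trajectories."""
    return -omega * y, omega * x

def simulate(omega, k, t_end, dt=0.001):
    """Integrate H[omega] with classic RK4 from x(0)=k, y(0)=0."""
    x, y = k, 0.0
    steps = round(t_end / dt)
    for _ in range(steps):
        k1 = hegel_rhs(x, y, omega)
        k2 = hegel_rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1], omega)
        k3 = hegel_rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1], omega)
        k4 = hegel_rhs(x + dt * k3[0], y + dt * k3[1], omega)
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = simulate(omega=2, k=0.5, t_end=1.0)
print(x, 0.5 * math.cos(2.0))    # being: numerical vs closed form
print(x * x + y * y)             # conservation law: stays at k^2 = 0.25
```

The simulated fluctuations trace a perfect circle of radius k in being/nothing space, exactly the conserved “size” described above.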

Let’s take a look at some examples.

Examples: the Hegel numbers H[2] and H[5]

Every H number defines a trajectory of fluctuations of being and nothing through time. The general solution of H[ω] is:

x(t) = f(λ,ω)·cos(ω·t), y(t) = f(λ,ω)·sin(ω·t)

The fluctuations of the Hegel number, H[ω], over time.

To plot example trajectories we need to specify energy levels. Purely for the sake of illustration, let’s define f(λ,ω) = 1/ω. So “faster” H numbers are “smaller”.

The Hegel number H[2] is then:

(i) H[2] is defined by an equation system. (ii) Being, x(t), and nothing, y(t), oscillate over time. (iii) The phase-space of H[2] is a circle.

The trajectory of H[5] is qualitatively similar, except it moves faster, with smaller fluctuations:

The trajectory of H[5]. H[5] oscillates faster than H[2] but the amplitudes are smaller. In consequence it traces out, in phase space, more of a (smaller) circle, in the same amount of time, compared to H[2].

We get a better idea of the difference between H[2] and H[5] by seeing them in action:

The phase-space of Hegel number H[2] as it fluctuates over time.

Hegel number H[5] is smaller and faster than H[2].

So I hope you’ve got a good idea of how different Hegelian integers behave.

Normal numbers, such as integers, can be added, subtracted, multiplied and divided. We can perform operations on them. What kinds of operations can we perform on Hegel numbers?

Sublating Hegel numbers

Well, there are many possible operations. Here we’ll just focus on one, which I’ll call the sublation operator.

A Hegelian number is ultimately a causal structure that describes an interaction between being (x) and nothing (y). For example, we picture H[ω1] and H[ω2] as:

Two H numbers, ready for synthesis into a higher unity.

We wish to construct a new causal structure from the component numbers H[蠅1] and H[蠅2]. There are many possible methods of doing this. But we want a method that is consistent with Hegel’s speculative derivation of becoming (determinate being) from being and nothing. Some principles we need to observe are:

  1. Being always “passes over into” nothing.
  2. Nothing always “passes over into” being.
  3. Being always affirms nothing (i.e., has a distinct “direction” different from nothing).
  4. Nothing always negates being (i.e., also has a “direction” that’s different from being).
  5. The higher, sublated unity always preserves its components as “moments”.
  6. But the higher unity also “puts an end to” its components and manifests new properties not reducible to them. (The whole is greater than the sum of its parts.)

Let’s translate the above principles into a mathematical operation that joins two Hegelian numbers together.

Principles (1) and (2) imply that, in the above diagram of H[ω1] and H[ω2], we need to add some new connections. Specifically, we connect the being of H[ω1] to the nothing of H[ω2] (i.e. add a directed edge from x1 to y2), and we connect the nothing of H[ω1] to the being of H[ω2] (i.e., add a directed edge from y1 to x2).

Principle (3) implies the connection from x1 to y2 is positive, +, with a reaction rate that is some function, g, of the component rates, i.e. +g(ω1, ω2). And principle (4) implies the connection from y1 to x2 is negative, -, with a symmetric reaction rate of -g(ω1, ω2).

What should the function, g, actually be? There are lots of possibilities so the choice seems undetermined and arbitrary, and therefore we lose necessity. But it turns out that, in order to satisfy principles (5) and (6), the choice is severely constrained.

So I’ll just stipulate that g is the function g(ω1, ω2) = ω2 − ω1. Why this choice satisfies principles (5) and (6) will become apparent in a moment.

So if we do all the above then we get a new causal structure. The sublated unity of two Hegel numbers is:

H[ω1] ⊕ H[ω2]: a sublation of two Hegel numbers. H[ω1] ⊕ H[ω2] is a new unity of being and nothing where: (i) the being of H[ω1] “passes over into” the nothing of H[ω2] and (ii) the nothing of H[ω1] “passes over into” the being of H[ω2]. The reaction rates of the two new connections are identical in magnitude, but differ in sign, and are a simple function of the component reaction rates.

(One might ask, quite naturally: why not have reciprocal connections from H[ω2] back to H[ω1]? There isn’t really a good reason, except that we don’t want to impose that H[ω1] ⊕ H[ω2] necessarily produces the same causal structure as H[ω2] ⊕ H[ω1]. At this stage, we want to respect the order of sublation.)

More formally, H[ω1] ⊕ H[ω2] is the following 4D system of coupled differential equations:

x1′(t) = −ω1·y1(t)
y1′(t) = ω1·x1(t)
x2′(t) = −ω2·( y2(t) − y1(t) ) + x1′(t)
y2′(t) = ω2·( x2(t) − x1(t) ) + y1′(t)

H[ω1] ⊕ H[ω2] is a 4D system of coupled differential equations, with two dimensions of being (x1 and x2) and two dimensions of nothing (y1 and y2).

Hegel number H[ω1] has new outputs that connect to H[ω2], but has no inputs from H[ω2], which reflects the order of the operation H[ω1] ⊕ H[ω2]. So H[ω2] gets “attached” to H[ω1], and therefore H[ω1] partially controls or drives H[ω2] (and not the other way around) in a kind of master-slave relationship (this is merely a poetic reference to one of Hegel’s concepts, rather than an interpretive claim). But, as a consequence of this asymmetrical relationship of control, the dynamic equations for H[ω1] are entirely preserved in the sublated unity (i.e., the first two equations above are identical to the equations for an isolated Hegel number).

In contrast, Hegel number H[ω2] in the sublated unity has new inputs from H[ω1], and therefore its dynamic equations have new terms (when compared to H[ω2] as an isolated system). Above, I’ve written those inputs in a slightly different, but equivalent, form. In the sublated unity, the being of H[ω2] changes with respect to (i) the nothing of both H[ω2] and H[ω1], but this necessarily implies that it also changes with (ii) the change in the being of H[ω1] (as per the x1′(t) term on the right-hand-side of the equation for x2′(t) above). This is a kind of second-order effect within the sublation.

Similarly, the nothing of H[ω2] changes with respect to (i) the being of both H[ω2] and H[ω1], but this necessarily implies that it also changes with (ii) the change in the nothing of H[ω1] (as per the y1′(t) term on the right-hand-side of the equation for y2′(t) above).

In consequence, the dynamic “laws of motion” of the being and nothing of H[ω2], in the sublated unity, recursively refer to the “laws of motion” of H[ω1].

So how does H[ω1] ⊕ H[ω2] behave over time? The solution to the 4D system is:

x1(t) = f(λ,ω1)·cos(ω1·t), y1(t) = f(λ,ω1)·sin(ω1·t)
x2(t) = f(λ,ω2)·cos(ω2·t) + f(λ,ω1)·cos(ω1·t), y2(t) = f(λ,ω2)·sin(ω2·t) + f(λ,ω1)·sin(ω1·t)

The dynamics of H[ω1] ⊕ H[ω2] have a surprisingly simple form. H[ω1], in the unity, behaves just like an isolated H[ω1]. And H[ω2], in the unity, behaves as a straightforward addition of the dynamics of an isolated H[ω2] and an isolated H[ω1]. For example, x2(t) consists of two terms:

x2(t) = f(λ,ω2)·cos(ω2·t) + f(λ,ω1)·cos(ω1·t)

The first term is simply the dynamics of the being of an isolated H[ω2], and the second term is simply the dynamics of the being of an isolated H[ω1]. The same additive structure applies to y2(t).
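The superposition claim can be verified numerically. The sketch below (my own illustration; the explicit coupling terms, with rates ±(ω2 − ω1), follow my reading of the sign conventions) integrates the 4D system and compares it with the closed form:

```python
import math

def deriv(state, w1, w2):
    """Right-hand side of the 4D system H[w1] (+) H[w2].
    H[w1] runs exactly as if isolated; H[w2] receives extra
    inputs from H[w1] (the sublation connections)."""
    x1, y1, x2, y2 = state
    dx1 = -w1 * y1
    dy1 = w1 * x1
    dx2 = -w2 * (y2 - y1) + dx1   # nothing of H[w2] and H[w1], plus x1'(t)
    dy2 = w2 * (x2 - x1) + dy1    # being of H[w2] and H[w1], plus y1'(t)
    return (dx1, dy1, dx2, dy2)

def rk4(state, w1, w2, dt, steps):
    """Classic 4th-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = deriv(state, w1, w2)
        k2 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k1)), w1, w2)
        k3 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k2)), w1, w2)
        k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)), w1, w2)
        state = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

w1, w2 = 2, 5
f1, f2 = 1 / w1, 1 / w2                  # illustrative scale f = 1/omega
state = (f1, 0.0, f1 + f2, 0.0)          # initial conditions of the superposition
steps, dt = 2000, 0.0005                 # integrate up to t = 1
x1, y1, x2, y2 = rk4(state, w1, w2, dt, steps)

t = steps * dt
print(x2, f2 * math.cos(w2 * t) + f1 * math.cos(w1 * t))  # numerical vs closed form
```

The integrated trajectory matches the claimed linear superposition to high precision.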

Clearly, we can analyse the behaviour of any part of this complex unity. But here we’ll focus on the resultant behaviour, which is the fluctuations of being and nothing of the final Hegel number in the sublation. So, in this case, the resultant is the trajectory of x2 and y2.

To plot the resultant dynamics we must again stipulate an energy scale. As before, we’ll choose f(λ,ω) = 1/ω, so “faster” Hegel numbers have smaller “scale”. Here’s the sublation of Hegel numbers H[2] and H[5]:

The resultant trajectory of H[2] ⊕ H[5] is a linear superposition of the dynamics of H[2] and H[5].

Isolated Hegel numbers traverse perfect circles in being/nothing space. But their sublated unity is more complex: here the trajectory is an interesting, repeated pattern.

Hegel claims, in his Logic, that sublation is ‘one of the most important notions in philosophy’. A sublation both preserves or maintains its components and “puts an end to them”. Clearly the sublation operator introduces new properties we’ve not seen before. But in what sense does it preserve its components?

The preservation of components is obvious once we decompose the trajectory of H[2] ⊕ H[5]:

The trajectory of H[2] ⊕ H[5] decomposed into a blue component (the dynamics of H[2]) added to an orange component (the dynamics of H[5]). Each component acts like an isolated Hegel number, and traverses a perfect circle at a different rate (H[5] “rotates” faster in phase-space compared to H[2]). The resultant behaviour of the sublated unity is the vector addition of the component trajectories.

So the sublation operator, by causally relating the being and nothing of the component contradictions, both preserves the component contradictions, and yet also produces a qualitatively new “ceaseless unrest” (more complex fluctuations) and “quiescent result” (a bounded, repeated trajectory in phase-space).

A word of warning about the animated phase-space visualisations: don’t confuse the map with the territory. The sublation H[2] ⊕ H[5] is not moving in space, and its components are not rotating. This is just a useful picture to help us understand the dynamics of being and nothing:


The above sublation does not really exist in space, and doesn’t move within it. Rather, at any time, this structure has 2 activity levels of being (x1 and x2) and 2 activity levels of nothing (y1 and y2). Over time, being and nothing interact, and those activity levels fluctuate. (You may like to think of “lights” at the nodes that wax and wane).

Higher order sublations

Why stop here? We can sublate a sublation; in other words, apply the sublation operator, ⊕, as many times as we want — to any combination of Hegel numbers.

For example we can sublate H[ω1] ⊕ H[ω2] with a third Hegel number, H[ω3], to get the higher-order unity, H[ω1] ⊕ H[ω2] ⊕ H[ω3]. Each time, we apply the same principles, and therefore “attach” H[ω3] to H[ω1] ⊕ H[ω2] by adding new connections: (i) the being of H[ω1] and H[ω2] become inputs to the nothing of H[ω3], and (ii) the nothing of H[ω1] and H[ω2] become inputs to the being of H[ω3]. As before, principles (5) and (6) determine the reaction rates of these new connections.
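The attachment rule can be written down as a small graph construction. A hedged sketch: the cross-connections follow rules (i) and (ii) above, while the internal being/nothing links of each component are my assumption (they mirror the mutual causation of an isolated Hegel number), and principles (5) and (6), which set the reaction rates, are not modelled here.

```python
def sublation_graph(n):
    """Directed causal graph of an nth-order sublation. When component k is
    attached: (i) the being of every earlier component feeds the nothing of
    component k, and (ii) the nothing of every earlier component feeds the
    being of component k. Each component also keeps internal being <-> nothing
    links (an assumption, mirroring an isolated Hegel number)."""
    edges = set()
    for k in range(1, n + 1):
        edges.add((("being", k), ("nothing", k)))       # internal causation
        edges.add((("nothing", k), ("being", k)))
        for j in range(1, k):
            edges.add((("being", j), ("nothing", k)))   # rule (i)
            edges.add((("nothing", j), ("being", k)))   # rule (ii)
    return edges

g3 = sublation_graph(3)  # 3rd-order sublation: 6 internal + 6 cross edges
```

Note that the number of cross-connections grows quadratically (n(n−1) for order n), which is why the causal structure rapidly becomes complex.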

And we can keep doing this, building more and more complex unities of being and nothing.

Here are the next four, higher-order sublations. As we can see, the causal structure rapidly becomes complex:

The 3rd, 4th, 5th and 6th-order sublations of Hegel numbers.

The system of recursive ordinary differential equations that defines a sublation of arbitrary order is:


Despite the network complexity, the resultant trajectories conform to a simple pattern. For example, the 6th-order sublation, depicted above, defines a 12-D system of coupled differential equations. The resultant behaviour is, however, a linear superposition of its components:

The resultant solution of the 6th-order sublation of Hegel numbers. Both being and nothing are linear combinations of sines or cosines, where each term represents the trajectory of the component Hegel numbers (if they acted in isolation).

Although the form is relatively simple, the resultant fluctuations of being and nothing get increasingly complex. Here’s a plot of H[1] ⊕ H[2] ⊕ H[3] ⊕ H[4] ⊕ H[5] ⊕ H[6]:

H[1] ⊕ H[2] ⊕ H[3] ⊕ H[4] ⊕ H[5] ⊕ H[6]
We could investigate further properties of the sublation operator. And we could consider other kinds of operations on Hegel numbers, which are also relevant to our story. But, for the sake of brevity, we’ll move on.

The totality of Hegelian integers

In general, an nth-order sublation defines a linear dynamic system whose resultant behaviour is a linear combination of the n component Hegel numbers. For example, the sublation of the first n Hegelian integers yields the resultant behaviour:

The resultant fluctuations of being and nothing generated by the sublation of the first n Hegelian integers.
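As a concrete sketch (again assuming the scale choice f(λ,ω)=1/ω, with λ=1), the resultant being and nothing of H[1] ⊕ … ⊕ H[n] are truncated Fourier series. A well-known consequence: as n grows, the resultant “nothing” converges, for 0 < t < 2π, to the sawtooth wave (π − t)/2, a simple illustration of simple components summing to a more complex whole.

```python
import math

def resultant_being(n, t):
    """Resultant 'being' of H[1] (+) ... (+) H[n], assuming
    f(lambda, omega) = 1/omega with lambda = 1: a truncated cosine series."""
    return sum(math.cos(k * t) / k for k in range(1, n + 1))

def resultant_nothing(n, t):
    """Resultant 'nothing' of the same sublation: a truncated sine series."""
    return sum(math.sin(k * t) / k for k in range(1, n + 1))

being3 = resultant_being(3, 0.0)         # 1 + 1/2 + 1/3 at t = 0
approx = resultant_nothing(20000, 1.0)   # converges to the sawtooth value
sawtooth = (math.pi - 1.0) / 2
```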

But why stop at a finite number of sublations? We can sublate every possible Hegelian integer into an infinite-dimensional dynamic system.

Traditionally, we view the totality of the integers as the infinite set: ℕ = {1, 2, 3, …}. Each integer in this set is a static quantity that relates to other members via arithmetic operations (e.g., 2 = 1 + 1). The relations are “external” in the sense that we are the active agents who apply the operators and instantiate these relations (e.g., 1+3=4, 12/3=4 etc.).

The sublated totality of the Hegelian integers is different. Yes, we apply the sublation operator. But once applied, each Hegelian integer, within the totality, is a dynamic structure that relates to other members via causal relations. The whole sublation moves. In this sense, the relations are “internal” and the contradictions manipulate each other, as active agents.

The infinite, sublated totality of the Hegelian integers is an ∞-dimensional dynamic system. The resultant fluctuations of being and nothing are the limit of an infinite sum of sine and cosine waves.

Hegel’s Science of Logic claims that anything that exists must be a dynamic contradiction of being and nothing. So, according to Hegel, the integers may appear to be static quantities with external relations, but really they must be dynamic contradictions with internal relations. We’ve translated Hegel’s metaphysical statements into a mathematical model. But are there any advantages of thinking of the infinite set of integers as H[ℕ] rather than ℕ? Is Hegel’s metaphysical viewpoint fruitful? Again, is Hegel’s “logic” a logic worth having?

What we’ll now see is that Riemann’s revolutionary new way of seeing, embodied in his Zeta function, is precisely this Hegelian viewpoint. Of course, neither Riemann, nor any modern mathematician, adopts a Hegelian interpretation of the mathematics of the Zeta function. Nonetheless, the Zeta function is a machine for exploring the dynamic behaviour of the infinite, sublated totality of the Hegelian integers, H[ℕ]. The Zeta function is full of being, nothing and becoming; and therefore full of contradiction and movement.

In the next few sections, I’ll demonstrate this connection between H[ℕ] and Zeta, and then speculate on why Riemann’s new way of seeing the integers turned out to be so successful.

Back to Riemann: from H[ℕ] to the Zeta function

H[ℕ] is an infinite-dimensional dynamic system that represents the totality of integers as sublated Hegelian contradictions. The Hegelian integers interact with each other and change over time. In contrast, Riemann’s Zeta function, ζ(s), is a static, timeless map from complex number inputs to complex number outputs. What have these things got to do with each other?

The first step in relating H[ℕ] and ζ(s) is to define a map from the activity levels of being and nothing to points in the complex plane. We map Hegel’s being to the real number line, and map Hegel’s nothing to the imaginary number line. So we represent an activity level of being and nothing, at a specific time, by a complex number:

The map from being/nothing in 2D phase-space to the complex plane. Being is mapped to the real axis, and nothing is mapped to the imaginary axis.

In H[ℕ] we have two free parameters: a universal scale, λ, that controls the total substance in the entire sublation, and a universal time, t, that controls the evolution of the entire sublation. In the next step, we represent time and energy as another complex number, s = λ + it, where the energy level is the real part of s and the time is the imaginary part of s.

Now we form a complex-valued function, f(s), that (i) takes as input the complex number, s = λ + it, which represents a specific energy level and time, and (ii) outputs a complex number, x(λ,t) + i y(λ,t), which represents the resultant activity levels of being and nothing in H[ℕ]:

f(s) calculates the resultant level of being and nothing for a given energy level and time.

In other words, the complex-valued function f(s) “embeds” the dynamics of H[ℕ] for all possible energy levels and all possible times.

Earlier, we viewed a complex-valued function as mapping points in the complex plane to points in the complex plane. This was a very “syntactic” or mechanical viewpoint of the function. Hegel’s logic makes us think of this mapping in a new, more “semantic” or meaningful way: we can think of the complex-valued function as representing a sublated unity of Hegelian contradictions that maps a particular point in (energy, time)-space to a particular point in (being, nothing)-space. If nothing else, this is certainly a more poetic point-of-view.

A Hegelian interpretation of the complex-valued function, f(s), which represents a sublated unity of Hegelian contradictions. The real input sets the energy level of the sublation, and the imaginary input sets the time. The output of the function is the resultant state of the unity at this time and energy level, where the real output is the quantity of being, and the imaginary output is the quantity of nothing.

However, we need to consider some additional mathematical technicalities to properly embed H[ℕ] in the complex plane.

The Cauchy-Riemann constraint

To ensure that f(s) is truly a function of a single complex variable, s = λ + it, we need to ensure that the Cauchy-Riemann equations are satisfied (they are the partial differential equations labelled A and B below).

Recall that Hegelian integers have an arbitrary scale function f(λ,ω) that relates the “speed” of the contradiction, ω, to the amplitude of the contradiction (via the universal scaling constant, λ). In our previous examples, we arbitrarily chose a function f (e.g., f(λ,ω)=1/ω such that faster contradictions had smaller amplitudes). However, the Cauchy-Riemann equations further constrain our choice of f(λ,ω):

The Cauchy-Riemann equations imply that the energy function f(λ,ω) must satisfy a specific partial differential equation (1) and therefore take a particular form.

It turns out that, in order for f(s) to be a function of a single complex variable, the amplitude of the Hegelian contradictions must exponentially decrease with their frequency of oscillation. However, the Cauchy-Riemann equations only partially determine f(λ,ω), up to an arbitrary function, g(ω). So we still have a degree of freedom in our choice of f(λ,ω). But another mathematical technicality completely determines f(λ,ω).
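The constraint can be sketched explicitly. Take one component of the sublation, with frequency ω and amplitude f(λ,ω); the sign convention for the nothing-part below is my assumption, chosen so that the component behaves like g(ω)e^{−ωs}:

```latex
u(\lambda, t) = f(\lambda,\omega)\cos(\omega t), \qquad
v(\lambda, t) = -f(\lambda,\omega)\sin(\omega t), \qquad s = \lambda + i t.

\text{Cauchy-Riemann (A): } \frac{\partial u}{\partial \lambda} = \frac{\partial v}{\partial t}
\;\Longrightarrow\; \frac{\partial f}{\partial \lambda}\cos(\omega t) = -\,\omega f \cos(\omega t)
\;\Longrightarrow\; \frac{\partial f}{\partial \lambda} = -\,\omega f. \tag{1}

\text{Equation (B), } \frac{\partial u}{\partial t} = -\frac{\partial v}{\partial \lambda},
\text{ yields the same condition, whose general solution is }
f(\lambda,\omega) = g(\omega)\, e^{-\lambda\omega}.
```

So the amplitude decays exponentially in the frequency ω, with the factor g(ω) left undetermined.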

Avoiding bad infinities

The resultant fluctuations of being and nothing in H[ℕ] are the limits of infinite sums. In general, infinite sums can explode and therefore fail to converge to a finite value. Clearly, the simple sum of all the integers 1 + 2 + 3 + … explodes to infinity. The sum of the reciprocals of the integers 1 + 1/2 + 1/3 + … also explodes to infinity. In contrast, the alternating sum of the reciprocals of the integers does converge to a finite value, i.e. 1 – 1/2 + 1/3 – 1/4 + … = log(2).
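These convergence claims are easy to check numerically (a minimal sketch; the function names are mine):

```python
import math

def harmonic_sum(n):
    """Partial sum of 1 + 1/2 + 1/3 + ...; grows like log(n) + 0.577...,
    so it explodes (diverges) as n increases."""
    return sum(1.0 / k for k in range(1, n + 1))

def alternating_sum(n):
    """Partial sum of 1 - 1/2 + 1/3 - ...; converges to log(2),
    with error at most 1/(n + 1)."""
    return sum((-1.0) ** (k - 1) / k for k in range(1, n + 1))

h = harmonic_sum(10**6)      # already past 14, and still climbing
a = alternating_sum(10**6)   # pinned close to log(2) = 0.6931...
```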

The complex-valued function, f(s), which embeds H[ℕ] in the complex plane, must also converge to finite (complex) values. We know that every Hegelian contradiction is a conservative, bounded system. But it doesn’t automatically follow that the sublation of an infinity of such contradictions is also bounded. We need to avoid what Hegel would call “bad infinities”.

One of Riemann’s mathematical achievements was to construct a Zeta function that converges to finite values almost everywhere. He did this by generalising a real-valued function, famously analysed by Euler, using the technique of analytic continuation. Riemann’s Zeta therefore avoids bad infinities and generates finite outputs for all possible finite inputs (except for a pole at s=1). Here, we’ll avoid these technicalities by merely requiring that f(s) produce finite outputs for the restricted set of inputs that we really care about (specifically inputs in the critical strip mentioned above).

Even this weaker finiteness requirement imposes a strong restriction on the choice of the function f(λ,ω). To ensure that the infinite sum of contradictions converges to finite values, it turns out that f(λ,ω) must alternate in sign:

The Cauchy-Riemann constraint (that f is a function of a single complex variable) and the finiteness constraint (that f is a convergent infinite sum) completely determine our choice of scaling function f(λ,ω).

We’ve nearly completed our journey from the world of dynamic and coupled Hegelian numbers to the world of the static Zeta function. There is one last step to take, however.

An infinite sublation of the logarithm of the Hegelian integers

The last step is to take the logarithm of the Hegelian integers. So instead of working with

H[ℕ] = H[1] ⊕ H[2] ⊕ H[3] ⊕ …

we will work with

H[log ℕ] = H[log 1] ⊕ H[log 2] ⊕ H[log 3] ⊕ …


Why take logarithms? The simple answer is that this transformation allows us to make direct contact with the Zeta function. The slightly more complex answer is that we want to investigate the multiplicative structure of the integers, and logarithms make that easier. Recall that, by the Fundamental Theorem of Arithmetic, we can write every integer as a unique product of prime powers. For example, 144 is the product of powers of the primes 2 and 3:

144 = 2⁴ × 3²

So 144 is a nonlinear combination of the primes 2 and 3. But taking logs transforms multiplication into addition:

log(144) = 4 log(2) + 2 log(3)

and therefore log(144) is a linear combination of the (logged) primes log(2) and log(3). Linear relationships are easier to analyse.
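A quick check of both facts, the unique factorisation and the resulting linearity under logarithms (a minimal sketch using trial division):

```python
import math
from collections import Counter

def prime_factors(n):
    """Unique prime factorisation of n (the fundamental theorem of
    arithmetic), found by simple trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

f144 = prime_factors(144)                                 # 2**4 * 3**2
log_sum = sum(e * math.log(p) for p, e in f144.items())   # 4 log 2 + 2 log 3
```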

Let’s now take this final step. Once we do this, we can completely convert from the dynamic Hegelian world to the static complex-valued function world. We find, rather remarkably, that the Riemann Zeta function, and the infinite sublation of the logarithm of the Hegelian integers, are the same object:

The infinite sublation of the logarithm of Hegelian integers, H[log(ℕ)], is encoded by the Riemann Zeta function.
Let’s restate the final conclusion above a little more neatly (and with a slight abuse of notation):

The relationship between H[log(ℕ)] (an infinite sublation of Hegelian numbers) and Dirichlet’s eta function, η(s), and Riemann’s Zeta function, ζ(s).
You’ll notice that H[log(ℕ)] isn’t exactly the Zeta function. There’s an additional (simple) term. So, more precisely, H[log(ℕ)] is the Dirichlet eta function (sometimes called the “alternating Zeta function”), which, in the critical strip, has exactly the same zeros as the Zeta function. For our purposes, the difference between these two functions isn’t very important.
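We can watch this identity at work numerically. The sketch below evaluates the eta function by its defining alternating sum, Σ(−1)^(n−1) n^(−s) (slow but dependency-free; a serious computation would use an accelerated method or a library such as mpmath):

```python
import cmath
import math

def eta(s, terms=200000):
    """Dirichlet eta function via its defining alternating sum,
    sum_{n>=1} (-1)**(n-1) * n**(-s). Converges for Re(s) > 0, though
    slowly on the critical line, hence the large number of terms."""
    total = 0j
    sign = 1.0
    for n in range(1, terms + 1):
        total += sign * cmath.exp(-s * math.log(n))
        sign = -sign
    return total

# Sanity check at a real input: eta(2) = pi**2 / 12.
eta2 = eta(2 + 0j)

# Near the first Riemann zero, 0.5 + 14.134725i, the resultant being (real
# part) and nothing (imaginary part) vanish together.
z1 = eta(0.5 + 14.134725j)
```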

So what does this relationship actually mean? Essentially, Riemann’s Zeta function models the integers as a sublated totality of Hegelian contradictions. Let’s look a little deeper.

From time and scale to being and nothing

Previously, we plotted the first 3 zeros of the Zeta function. Below I plot the first 3 zeros of the Eta function. They are identical.

The Eta function maps the blue inputs to the red output spiral. The right-hand-side zooms in on the spiral, which intersects the origin on 3 occasions. In consequence, 3 points on the blue line are zeros of the Eta (and therefore Zeta) function. In fact, they are, just as before, 0.5+14.1347 i, 0.5+21.022 i, and 0.5 + 25.0109 i.

But now that we know the Eta function is actually a sublation of Hegelian contradictions we can give the following “metaphysical” interpretation of what this complex-valued function is doing:

Complex-valued inputs η(λ + it):
The real value, λ, is the scale of the sublation.
The imaginary value, t, is the time.
Complex-valued outputs x + i y:
The real value, x, is the resultant quantity of being (at this scale and time).
The imaginary value, y, is the resultant quantity of nothing (at this scale and time).

The input domain is time and scale, and the output is a state of becoming (of an infinite sublation of contradictions) at that specific time and energy level.

So when we traverse the blue vertical line in input-space (where the real input is fixed at 0.5, and the imaginary input ranges from 13 to 26) and then examine the output of the Eta function, what we are also doing is (i) fixing a scale for the Hegelian sublation and then (ii) watching its dynamic evolution from time t=13 to time t=26. We’re watching the evolution of a dynamic system.

Here’s an animation of H[log(ℕ)] generating the first zero of the Eta (and therefore Zeta) function:

The Eta function as a sublation of Hegelian contradictions. Each coloured arrow is a component contradiction. As time advances the contradictions interact, tracing out resultant fluctuations of being (x-axis) and nothing (y-axis). At time t~14.1 we see the first zero, where both total being and total nothing are zero. (N.B. Here we visualise only the first 10 of the infinite number of contradictions. And note that each contradiction gets a little smaller but moves a little faster).

(For more zeros, see this YouTube animation of first 100 zeros of eta function).

In summary, the Zeta function encodes an infinite-dimensional dynamic system at all times and all energy scales. This dynamic system consists of interacting Hegelian numbers, which are the logarithms of the integers. The Zeta function is an infinite sublation of Hegelian contradictions.

Part 3: The metaphysics of Riemann’s revolution

OK, let’s return to the primes.

Riemann’s mathematical genius allowed him to relate the zeros of the Zeta to the distribution of the primes. This connection manifests as an explicit formula for the Chebyshev staircase in terms of the Zeta zeros (which we discussed in Part 1).

Mathematically this connection is very clear, although obscured by the technical apparatus of analytic number theory. Roughly, Riemann relates the Zeta function to the prime numbers via Euler’s product formula (and this relationship is really an expression of the Fundamental Theorem of Arithmetic). We can manipulate this relationship to reveal that the Zeta function not only relates to the primes but actually encodes the distribution of the primes (and their powers). We then rewrite the Zeta function in terms of its zeros, and thereby express the distribution of the primes directly in terms of the zeros. The upshot is an explicit formula for the Chebyshev staircase, where the Zeta zeros “conspire”, through an infinite sum, to control the fluctuation of individual primes (and their powers) around a straight-line law.
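The explicit formula can be sketched in a few lines. Below, ψ(x) is computed exactly from the prime powers, and then approximated by the truncated Riemann-von Mangoldt formula using the first ten zero ordinates (standard published values, truncated to six decimals); with only ten zero pairs the two already track each other closely:

```python
import math

# Imaginary parts of the first ten nontrivial Zeta zeros (published values).
ZERO_IMAG = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
             37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def psi_exact(x):
    """Chebyshev's psi(x): the sum of log p over all prime powers p**k <= x."""
    total = 0.0
    for n in range(2, int(x) + 1):
        m, p = n, None
        for d in range(2, math.isqrt(n) + 1):
            if m % d == 0:
                p = d
                while m % d == 0:
                    m //= d
                break
        if p is None:
            total += math.log(n)       # n is prime
        elif m == 1:
            total += math.log(p)       # n = p**k, a prime power
    return total

def psi_explicit(x, zeros=ZERO_IMAG):
    """Truncated explicit formula:
    psi(x) ~ x - sum_rho x**rho / rho - log(2*pi) - 0.5*log(1 - x**-2),
    where the zeros come in pairs rho = 1/2 +/- i*gamma, so each gamma
    contributes twice the real part of x**rho / rho."""
    total = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
    for gamma in zeros:
        rho = complex(0.5, gamma)
        total -= 2 * (x ** rho / rho).real
    return total

e100 = psi_exact(100)      # the true staircase height at x = 100
a100 = psi_explicit(100)   # ten zero pairs approximate it
```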

As of today, we don’t know where all the zeros really live. But we do know roughly where they must be. And, as mentioned already, this information alone is sufficient to prove powerful results such as The Prime Number Theorem.

I’ve glossed over a huge quantity of technical material, but this is the strictly mathematical story.

What are the Zeta zeros? A mathematical answer

So what are the zeros of the Zeta function? Mathematically they “encode” the distribution of the primes. More specifically, a system composed of infinitely many oscillatory waves, where the oscillation frequency of each wave is the imaginary part of a Zeta zero, will resonate (that is, have maximum amplitude) at the primes and their powers. As Marcus du Sautoy so eloquently expressed it, the Zeta zeros are the underlying music of the primes.

So the Zeta zeros are an infinite set of frequencies that together control the distribution of the primes. (For this point-of-view see Prime Numbers and the Riemann Hypothesis by Mazur and Stein, a slightly more technical but nonetheless accessible exposition.)

So we can’t directly relate an individual Zeta zero to a prime or its power. It doesn’t work that way. Instead, all the zeros collaborate in “generating” the primes and their powers. As Hegel — or José Mourinho — might say, the truth is in the whole.

Elementary methods of number theory, which remain in the world of integers and the simple arithmetic operations of addition, multiplication etc., struggled to decode the distribution of the primes. Riemann helped us to understand more of the structure of the primes by viewing the primes as being generated by a much more complex object — the Zeta function.

What are the Zeta zeros? A Hegelian answer

Hegel’s metaphysical bedrock is pure being and pure nothing. Pure being, as we saw previously, explodes to infinity, and pure nothing implodes to nothing. These pure states can’t exist since they’re unstable. Hence, we have becoming, their sublated unity, which exhibits both order and disorder.

Hegel, in his Logic, goes on to claim that becoming must individuate into separate things, which relate to each other in sublated unities of higher and higher complexity.

This universal process finally culminates in a state of absolute knowledge, which overcomes the original contradiction between being and nothing, and where God finally comes to fully know itself. So in Hegel’s philosophy there is some kind of limit, or end-point of final reconciliation.

Perhaps surprisingly, the mathematics of the Zeta function has a similar structure.

Mathematically, as we sublate Hegelian integers, they become increasingly causally entwined, and we create higher and higher complexity. The Zeta function encodes the infinite limit of this process.

The Zeta function exhibits order and disorder. In fact, the fluctuations of being and nothing are chaotic in the strictly mathematical sense. The disorder of the infinite sublation is more disorderly than any single component.

But order emerges from this chaos. It appears that the Zeta function generates trajectories that forever fluctuate about a special, zero state.

The zero state is very special indeed.

In the Hegelian interpretation, a Zeta zero is a moment when both being and nothing are identically zero. Or, if we apply the reciprocal map from earlier, a moment when they are identically infinite. So either the final lights in the infinite sublation blaze bright, or they’ve blinked out of existence.

This means that:

The zeros of the Zeta function are moments in time when becoming, which is an infinity of contradictions, attains a state of pure being or pure nothing.

An individual contradiction can never do this. So the order manifested by the infinite sublation is more orderly than any single component. But these pure states of perfect order are achieved by infinite chaos. So, once again, they are unstable and therefore transitory, and now merely moments of an infinitely complex process of becoming.

Riemann, in his remarkable paper, demonstrated that the zeros encode the distribution of the prime numbers. The primes are irreducible atoms of the number system; they are the mathematical bedrock.

Hegel’s logic implies that these zeros are moments when becoming reduces to pure being or pure nothing. So the zeros represent the irreducible atoms of Hegel’s Science of Logic. They are a metaphysical bedrock.

This means that:

The mathematical irreducibility of the primes is a manifestation of the metaphysical irreducibility of pure being and pure nothing.

Conclusion: The metaphysics of Riemann’s revolution

I think it’s pretty clear, at this stage, that we have more questions than answers. But we can make some general remarks.

Riemann moved number theory into the complex plane. This revealed entirely new phenomena, which have yet to be fully understood.

The success of Riemann’s project is strong evidence that the whole numbers — which we think of as static, unchanging quantities — are really some kind of shadow or projection of the Hegelian integers. The Zeta function reveals more because it represents whole numbers as what they actually are: dynamic contradictions of being and nothing.

But, in addition, the Zeta function represents the whole numbers as a sublated unity, where the entities internally relate via the exchange of a conserved substance. And this whole moves and changes with time. This is quite unlike the vision offered by set theory.

In the 1970s physicists noticed that the distribution of the Zeta zeros follows the same statistical law as the distribution of energy levels of systems of subatomic particles (see the Hilbert–Pólya conjecture). For many, this connection was surprising and even shocking — for there seems to be no reason why fundamental physics and number theory should be intimately connected.

But Hegel would expect to see such connections, for the simple reason that he believed thought and being are identical, and conform to the same underlying laws, laws which he attempted to elucidate in his Science of Logic.

Of course, Hegel’s Logic did not invent analytic number theory or fundamental theories of physics. Rather, Hegel’s logic implies that harmonic phenomena are a necessary consequence of the fundamental ontological contradiction between being and nothing.

The reason harmonic analysis exists in mathematics and physics is because the phenomena demand it. Now, why do the phenomena demand this? According to Hegel, because anything that exists (whether in reality or in the mind) must be a dynamic contradiction of being and nothing. The appearance of harmonics in physics and number theory, in the most fundamental structures of physical reality and the most fundamental structures of Platonic thought, is a remarkable and thoroughly comprehensive clue that Hegel’s logic is not only a logic worth having, but a logic worth developing.




1 Comment


    “Of course, neither Riemann, nor any modern mathematician, adopts a Hegelian interpretation of their mathematics.”

    As Lenin would say, “There is such a party!”

    Two starting links from which far more can be found:

    See especially:

    “Unity and identity of opposites in calculus and physics”, Lawvere, F.W., Appl. Categor. Struct. (1996) 4: 167 (available via Sci-Hub DOI).

    [more at link]

