A Journey Into Pure Mathematics
I spent years treating math as a toolkit for AI research. Then a conversation with old friends sent me down a rabbit hole that turned out to be one of the most rewarding intellectual journeys I've had.
- Published: Apr 03, 2026
- Read time: 13 min
- Words: 2,755
- Author: Nguyen Xuan Hoa
TL;DR
- Pure mathematics has advanced far beyond applied math — and the gap is measured in decades, sometimes centuries.
- Most branches of math are just sets equipped with different kinds of structure. Once you see this, the whole landscape snaps into focus.
- The deepest results in math often come from connecting two worlds that look completely unrelated.
- If you work in engineering or AI and think you "know math," you might be surprised how much is out there.
It's been roughly five years since I last seriously engaged with theoretical mathematics. The tools I reach for daily are Linear Algebra, Probability, Calculus, and Optimization — all in service of AI research. For a long time, math was just infrastructure to me. A means to an end.
That changed after a reunion with some old friends from high school. A few of them had gone deep into pure math research. At some point during dinner I asked, half-jokingly: "So what do you actually study?" They said math is vast, and there are fascinating open questions at every level — Topology, for instance.
I pressed: "What does Topology study?" The answer was something like: "Structures, continuity, the generalisation of space..." My engineer's brain immediately searched for a use case. It didn't find one, and I let it go.
But the seed had been planted. A while later I started mapping the landscape myself — not to master every proof, but just to understand the shape of the world these people were living in. What are the big ideas? What are the open problems? Is any of it useful for what I do?
What I found was genuinely humbling. Pure mathematics has run so far ahead of applied math that you could measure the gap in decades, sometimes centuries. Mathematicians are continuously extending and generalising a structure that, when you step back and look at it whole, is one of the most beautiful things humans have built.
The Break Points — When the Old Framework Wasn't Enough
Mathematics doesn't develop smoothly. It advances through crises — moments when the existing system can no longer explain something that clearly exists. Each crisis forced an expansion of what we even mean by "number" or "space."
- Negative numbers and $\mathbb{Z}$: Numbers are already an abstraction. But when you have 3 apples and owe someone 5, you need something new. Negative numbers enter; the integers $\mathbb{Z}$ form.
- Rational numbers $\mathbb{Q}$: Division doesn't always come out clean. Split 3 loaves of bread between 2 people and you need fractions: $\frac{3}{2}$.
- Irrational numbers and $\mathbb{R}$: The real crisis came from the Pythagoreans themselves. The diagonal of a unit square is $\sqrt{2}$, and $\sqrt{2} \notin \mathbb{Q}$: you can't write it as any fraction. This reportedly horrified the school so deeply that Hippasus, who proved it, was (according to legend) drowned at sea. A worldview, shattered by a triangle.
- Complex numbers $\mathbb{C}$: The equation $x^2 + 1 = 0$ has no real solution. Rather than conclude the equation is meaningless, mathematicians defined $i = \sqrt{-1}$ and extended $\mathbb{R}$ to $\mathbb{C}$. The thing dismissed as "imaginary" became the foundation of quantum mechanics and signal processing.
Each time a wall was hit, climbing over it didn't just solve the immediate problem — it created an entirely new layer of mathematical infrastructure, more powerful than the one before. And every era has had walls like these.
Today we still have unsolved ones. The Navier–Stokes equations describe viscous fluid flow:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0$$

Used constantly in aerodynamics and weather modelling — yet whether smooth solutions always exist in three dimensions remains unproven, and the question carries a USD 1 million prize from the Clay Mathematics Institute.
Then there's the Collatz Conjecture, which is almost offensively simple to state: take any positive integer $n$. If $n$ is even, halve it. If odd, multiply by 3 and add 1. Does the sequence always eventually reach 1?
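The rule fits in a couple of lines, which is part of what makes the problem so maddening. A minimal sketch of the map:

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every starting value anyone has ever tested terminates,
# but no proof covers all of them.
print(collatz_steps(27))  # 27 famously wanders up to 9,232 before falling: 111 steps
```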
Terence Tao — arguably the greatest living mathematician — published a 2019 paper showing that almost all Collatz orbits attain almost bounded values: in a precise probabilistic sense, almost every starting value eventually drops below any threshold function that grows to infinity, however slowly. A remarkable result. But a complete proof? Still out of reach. Tao has described problems like this in a way I find striking: it's not that we're lazy and haven't climbed the cliff — it's that we haven't yet invented the ladder.
Many open problems aren't just hard. They're hard because we don't yet have the mathematical language to even formulate a clean attack. The solution will require something that doesn't exist yet.
The Foundational Crisis — When the Ground Itself Shook
Before getting into how mathematics rebuilt itself, I need to talk about the moment it nearly fell apart.
A friend of mine likes to pose this as a riddle: "The set of all sets that don't contain themselves — does it contain itself?" It sounds like wordplay. It isn't. This is Russell's Paradox (1901): let $R = \{x \mid x \notin x\}$. If $R \in R$, then by its own definition $R \notin R$; if $R \notin R$, then $R \in R$. Contradiction either way.
Importantly, Russell wasn't attacking Cantor's set theory — he was dismantling Gottlob Frege's Grundgesetze, an ambitious project to derive all of mathematics from pure logic. Frege received Russell's letter while his second volume was at the printer. He wrote in the postscript: "A scientist can hardly meet with anything more undesirable than to have the foundation give way just as the work is finished."
The fallout led to an even deeper question: Is mathematics itself complete and consistent? Can every true statement be proved? Kurt Gödel answered this in 1931, and his answer was devastating:
Gödel's First Incompleteness Theorem: in any consistent axiomatic system powerful enough to express basic arithmetic, there exist true statements that cannot be proved from within that system.
Not just "we haven't found the proof yet." Provably unprovable. There are mathematical truths that mathematics cannot prove about itself. This is one of the deepest philosophical results of the 20th century — it sets a hard ceiling on what any formal system can achieve.
After these shocks, mathematics was rebuilt on the foundation of Zermelo–Fraenkel set theory with the Axiom of Choice (ZFC) — roughly nine axioms that are sufficient to construct essentially all of modern mathematics. And when you look at the structure that emerges from that foundation, it's remarkably clean.
Sets With Rules — The Whole Landscape in One Table
Everything starts with sets. A bare set has no structure — it's just a collection of things. The different branches of mathematics are, essentially, what happens when you add different kinds of rules on top:
| Added structure | Branch | What it studies |
|---|---|---|
| Operations (addition, multiplication) | Abstract Algebra | Groups, Rings, Fields, Vector Spaces |
| Nearness, continuity | Topology | Convergence, connectedness |
| Distance (Metric) | Geometry, Analysis | Shape, measurement |
| Measure | Measure Theory, Probability | Size, randomness |
That last row contains something I love: probability is just measure theory with one extra rule — the total measure of the sample space must equal 1. That's Kolmogorov's axioms from 1933. It's why modern probability isn't just "counting outcomes" but is a branch of real analysis:

$$P(\Omega) = 1, \qquad P(A) \ge 0, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \ \text{ for disjoint } A_i$$
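The "measure with total mass 1" view can be seen in miniature on a finite sample space. A toy sketch — the three-outcome coin and its masses are invented purely for illustration:

```python
from fractions import Fraction

# A probability measure: nonnegative mass per outcome, total mass exactly 1
# (Kolmogorov's normalisation axiom).
sample_space = {"heads", "tails", "edge"}
measure = {"heads": Fraction(9, 20), "tails": Fraction(9, 20), "edge": Fraction(1, 10)}

def P(event: set) -> Fraction:
    """Measure of an event = sum of the masses of its outcomes (additivity)."""
    return sum(measure[o] for o in event)

print(P(sample_space))        # 1: the whole sample space has measure 1
print(P({"heads", "edge"}))   # 11/20: additivity over disjoint outcomes
```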
When you combine these structures in richer ways, you get the great spaces of modern mathematics:
- Banach spaces: A vector space with a norm and, crucially, completeness — every Cauchy sequence converges. The $L^p$ spaces live here: $\|f\|_p = \left( \int |f|^p \, d\mu \right)^{1/p} < \infty$.
- Hilbert spaces: A Banach space whose norm comes from an inner product $\langle \cdot, \cdot \rangle$, generalising Euclidean geometry to infinite dimensions. The quantum state of a particle is a vector in a Hilbert space: $\psi \in L^2(\mathbb{R}^3)$.
- Sobolev spaces $W^{k,p}$: Functions and their weak derivatives up to order $k$, all living in $L^p$. Indispensable for PDEs like Navier–Stokes. You can't seriously study fluid dynamics without them.
Analysis — The Study of Change
Analysis is, broadly, the mathematics of change: limits, derivatives, integrals, convergence. But not every set supports these ideas. To have a notion of "limit," you need at minimum a topology or a metric — something that tells you when two things are "close."
- Real analysis: Functions on $\mathbb{R}$. Riemann and Lebesgue integration. $L^p$ spaces.
- Complex analysis: Functions on $\mathbb{C}$. Holomorphic functions are almost unreasonably well-behaved: knowing the values on a small contour determines the values everywhere inside it (Cauchy's integral formula). This feels like magic until you see the proof, and then it feels like deeper magic.
- Functional analysis: Here things go to another level entirely. Instead of studying numbers or points, you study spaces of functions — where each "point" is itself a function.
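The contour-determines-interior claim from the complex analysis bullet can even be sanity-checked numerically. For $f(z) = e^z$, Cauchy's integral formula reduces to: the average of $f$ over the unit circle equals $f(0) = 1$. A small sketch:

```python
import cmath

# Verify Cauchy's integral formula numerically for f(z) = exp(z):
# f(0) = (1 / 2*pi*i) * integral of f(z)/z over the unit circle.
# With z = e^{i*theta}, dz = i*e^{i*theta} d(theta), so the integral
# collapses to the plain average of f over the circle.
N = 10_000
total = 0 + 0j
for k in range(N):
    theta = 2 * cmath.pi * k / N
    z = cmath.exp(1j * theta)
    total += cmath.exp(z)
approx = total / N

print(abs(approx - 1))  # tiny: values on the contour recover f(0) = 1
```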
And to study maps between spaces, you have operators — the infinite-dimensional generalisation of matrices. The derivative operator is a linear operator on the space of smooth functions. The whole machinery of linear algebra lifts to this setting, and the results are both more powerful and more subtle.
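One way to make "operators are infinite-dimensional matrices" tangible is to discretise: sample functions on a grid, and the derivative operator becomes a literal matrix. A finite-difference sketch (an engineer's approximation, not how functional analysts actually work):

```python
import numpy as np

# Discretise d/dx on [0, 2*pi) with periodic boundaries: the derivative
# operator becomes an ordinary matrix acting on sampled function values.
n = 1000
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]

# Central differences: (D f)_i = (f_{i+1} - f_{i-1}) / (2h).
D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * h)

f = np.sin(x)
df = D @ f  # a plain matrix-vector product
print(np.max(np.abs(df - np.cos(x))))  # small: D behaves like d/dx
```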
Two Worlds and the Dream of Unification
A rough but useful cut through mathematics:
- Discrete mathematics: Integers, graphs, combinatorics. Topology doesn't matter much here; Algebra is king. Number theory, graph theory, cryptography.
- Continuous mathematics: Smooth and flowing. Analysis and geometry dominate.
What's remarkable is that mathematicians refuse to let these worlds stay separate.
Fermat's Last Theorem and the Bridge Between Worlds
Fermat's Last Theorem: there are no positive integers $a$, $b$, $c$ and integer $n > 2$ satisfying:

$$a^n + b^n = c^n$$
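A brute-force search proves nothing, of course, but it makes the contrast with $n = 2$ vivid. The function name and bounds below are mine, purely for illustration:

```python
from itertools import combinations_with_replacement

def fermat_search(n: int, limit: int):
    """Look for a^n + b^n == c^n with 1 <= a <= b and c <= limit."""
    nth_powers = {c ** n: c for c in range(1, limit + 1)}
    hits = []
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        c = nth_powers.get(a ** n + b ** n)
        if c is not None:
            hits.append((a, b, c))
    return hits

print(fermat_search(2, 20))   # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(fermat_search(3, 200))  # [] -- and Wiles proved it stays empty forever
```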
Fermat scribbled in a book margin in 1637: "I have discovered a truly marvellous proof, which this margin is too narrow to contain." For 350 years, every attempt failed — through Euler, Sophie Germain, Ernst Kummer, and countless others. The proof finally arrived from Andrew Wiles in 1995. But how he got there is the real story.
Wiles didn't attack Fermat directly. He went the long way round, through the Taniyama–Shimura conjecture:
Every elliptic curve over $\mathbb{Q}$ is modular.
An elliptic curve — something like $y^2 = x^3 + ax + b$ with rational coefficients, a number-theoretic object — turns out to correspond exactly to a modular form, an object from complex analysis. These two things, from opposite ends of the discrete/continuous divide, are secretly the same object viewed from different angles.
In 1984, Gerhard Frey noticed: if Fermat were false (i.e., if a solution $a^n + b^n = c^n$ existed), the elliptic curve $y^2 = x(x - a^n)(x + b^n)$ would be too bizarre to be modular. Ken Ribet then proved precisely that. So: Taniyama–Shimura $\Rightarrow$ Fermat.
Wiles spent seven years working in secret. He announced a proof in 1993. A gap was found. Fourteen months later, working with his former student Richard Taylor, he fixed it. In 1995, the theorem was proved.
The proof is roughly 130 pages and requires graduate-level understanding of several deep fields simultaneously. Whatever Fermat thought he had in 1637 — assuming he wasn't just being cheeky — was almost certainly not this.
The Langlands Programme
But the Fermat–Taniyama–Shimura story isn't an isolated miracle. It's one piece of something much larger: the Langlands programme — sometimes called the "grand unified theory of mathematics."
In 1967, Robert Langlands sketched a vast web of conjectured connections between number theory (Galois theory, discrete), harmonic analysis (automorphic forms, continuous), and representation theory (algebraic). The goal is to build a dictionary: problems from one world become solvable using tools from another.
A direct descendant of this spirit is the Green–Tao theorem (2004): the primes contain arbitrarily long arithmetic progressions. Green and Tao attacked the primes — chaotic, discrete, seemingly random — through the lens of analysis, using higher-order Fourier analysis to find hidden structure: for every $k$, there exist a prime $p$ and a common difference $d > 0$ such that $p, \ p + d, \ p + 2d, \ \ldots, \ p + (k-1)d$ are all prime.
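The theorem guarantees such progressions exist for every length $k$; a naive search finds the short ones easily (and would be hopeless for long ones, which is exactly why the proof needed heavy analysis). A toy sketch, with a function name of my own choosing:

```python
def prime_ap(k: int, limit: int):
    """Find the first k-term arithmetic progression of primes below `limit`."""
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    primes = [i for i, is_p in enumerate(sieve) if is_p]
    prime_set = set(primes)

    # Brute force over every starting prime and common difference.
    for p in primes:
        for d in range(1, (limit - p) // (k - 1) + 1):
            if all(p + j * d in prime_set for j in range(1, k)):
                return [p + j * d for j in range(k)]
    return None

print(prime_ap(5, 200))  # [5, 11, 17, 23, 29]: five primes, common difference 6
```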
This reminds me of Random Matrix Theory: a collection of objects that looks like noise turns out to exhibit breathtaking structure at scale. It makes me wonder how many more hidden geometric regularities are lurking in things we currently perceive as chaotic.
Category Theory — The Mathematics of Mathematics
If you thought the above was abstract, welcome to Category Theory. Where mathematics studies objects, category theory studies the relationships between structures at the most general possible level. It doesn't care what objects are; it only cares how they relate to each other through structure-preserving maps — morphisms, functors.
The most powerful application: Algebraic Topology. Instead of comparing two topological spaces directly (very hard), you map them to algebraic groups (much more tractable). A hard geometric problem becomes a computation. This is the spirit of category theory: translate the problem into a context where you have better tools. It's the same move Wiles made with Fermat.
Physics and Mathematics: When the Debt Flows Both Ways
The cross-pollination between fields has reached a level I wouldn't have expected.
String Theory couldn't exist without differential geometry and complex Calabi–Yau manifolds. But mathematicians also routinely borrow from physics to crack problems that appear to be purely geometric.
The clearest example: the Poincaré Conjecture — one of the Millennium Prize Problems — was proved by Grigori Perelman in the early 2000s using Ricci Flow, an equation describing how a Riemannian metric "flows" over time, proposed by Richard Hamilton in 1982:

$$\frac{\partial g_{ij}}{\partial t} = -2 R_{ij}$$
But here's the part worth understanding clearly: Hamilton had the idea and saw its potential, but got stuck when the flow produced singularities — points where the geometry "blows up." Perelman's actual breakthrough was Ricci Flow with surgery: a technique for cutting out and patching singularities so the flow could continue. That was the invention. That was what made the proof possible.
Perelman then declined both the Fields Medal and the USD 1 million Millennium Prize, apparently having found the work itself to be sufficient.
What This Has to Do With Engineering
After mapping all of this, I kept asking myself the engineer's question: so what?
It turns out — more than I expected.
In digital signal processing, to design a stable filter you analyse the poles of its transfer function in the Z-plane:

$$H(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_M z^{-M}}{1 + a_1 z^{-1} + \cdots + a_N z^{-N}}$$

Stability requires all poles to lie inside the unit circle in the complex plane. That's a direct application of the geometric structure of $\mathbb{C}$. If you understand the topology of that space, you can reason about stability systematically — not just trial and error.
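The pole check itself is a few lines with NumPy. The `is_stable` helper and the example coefficients below are mine, for illustration:

```python
import numpy as np

def is_stable(a_coeffs) -> bool:
    """Check stability of a causal IIR filter.

    a_coeffs: denominator coefficients [a0, a1, ..., aN] of
    A(z) = a0 + a1*z^-1 + ... + aN*z^-N. The poles of H(z) = B(z)/A(z)
    are the roots of A; stability needs them all strictly inside the
    unit circle.
    """
    poles = np.roots(a_coeffs)
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable([1.0, -0.5]))  # True: single pole at z = 0.5
print(is_stable([1.0, -1.5]))  # False: pole at z = 1.5, outside the circle
```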
Or take Harmonic Analysis — the deep evolution of Fourier series — which is simultaneously the weapon Tao used to see structure in the primes and the mathematical foundation of audio compression, image encoding, and digital communications. Prime distribution and signal processing share the same underlying tools. I genuinely did not expect that.
The Language Underneath Everything
After all of this, the vague answer my friends gave — "Topology studies structure" — doesn't seem vague anymore. I just didn't have the vocabulary to hear it properly. Maybe that's how mathematics works: the meaning is there, but you have to build the language before you can see it.
What's actually happening at the frontier, as far as I can tell, is that mathematics, physics, and philosophy are collaborating — sometimes without realising it — on building a universal language for describing every possible structure consistently. Not just the physical world. Any world that can be thought about rigorously.
I never expected to find this fascinating. For most of my career, the ceiling of my interest was "does this make my model train better." But the longer I spent looking at the map of pure mathematics, the more I felt something like vertigo — the good kind. The kind that comes from realising you've been standing at the base of a mountain range you didn't know existed.
I'm not going to become a pure mathematician. But I am going to keep looking up.
This is a personal account of one engineer's exploration of mathematics — not a rigorous academic reference. Corrections and pushback very welcome.