Last week, a 23-year-old amateur with no advanced mathematical training did something many mathematicians never manage in a lifetime: he solved a 60-year-old problem posed by the spectacularly prolific Hungarian mathematician Paul Erdős. Liam Price, armed chiefly with curiosity and a ChatGPT-Pro subscription, found a solution to a long-standing Erdős conjecture that had resisted attack for decades. Experts say the method appears genuinely new.
This would be a delightful one-off story on its own: the outsider beating the insiders. But it comes only weeks after another striking moment in mathematics. In March, the AI start-up Math, Inc. announced that its systems had formally verified Maryna Viazovska’s celebrated proof of the 24-dimensional sphere-packing problem. To do so, they translated one of the great mathematical achievements of recent years into ‘Lean’ – a programming language that allows a mathematical proof to be checked by a computer.
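To give a flavour of what such formalisation involves, here is a toy machine-checked proof in Lean – a deliberately trivial example, with no connection to the sphere-packing work itself, showing how a statement and its justification become something a computer can verify:

```lean
-- A toy example of a machine-checked proof in Lean 4.
-- Illustrative only: this has nothing to do with the
-- sphere-packing formalisation, which runs to vastly
-- greater length and complexity.

-- Statement: addition of natural numbers is commutative.
theorem toy_add_comm (a b : Nat) : a + b = b + a :=
  -- The justification appeals to a lemma already proved
  -- in Lean's standard library; the checker confirms that
  -- it exactly matches the statement above.
  Nat.add_comm a b
```

The point is the gap in scale: a one-line lemma like this is checked instantly, whereas translating a research-level proof into this form can take teams of people months or years – which is why AI assistance for the translation step matters.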
Put the two stories together and a larger picture begins to emerge. AI helps a newcomer find a proof. AI then helps turn a masterpiece of human reasoning into something a computer can certify line by line. The obvious next step is complete formal proofs produced by AI, which can then be verified by a computer. We are not there yet, and caution is needed. But the direction of travel is clear enough.
Price’s success is exciting partly because it hints at the democratisation of mathematics. What once required years of specialist training may increasingly be open to anyone with persistence, imagination and the right tools. AI may widen access to one of the most forbidding of human pursuits. It may help researchers test ideas, explore examples and automate drudgery. It may spot patterns no human would notice.
Yet many mathematicians are uneasy about the discovery – perhaps because mathematics has never really been just about proofs.
An AI does not think like a mathematician. Instead of writing elegant step-by-step mathematical arguments that humans understand, it creates forests of syntax that are tough or impossible for mathematicians to follow. On the Lean theorem-prover forums, where many of these debates now unfold, mathematicians have openly discussed the tension between formalising a proof and genuinely understanding it – and whether huge AI-assisted projects risk producing proofs that are formally correct yet educationally barren.
A mathematical proof in a journal is not just a certificate of truth. At its best, it is an explanation – a compact story that shows why something is true, what ideas matter, and how the result connects to others. Euclid’s proofs are still read 2,000 years later not because we doubt the theorems, but because we value the understanding they provide. If the future of mathematics is about accumulating machine-checkable certificates, something precious may be lost.
For generations, people have been taught to think of mathematics as a matter of pushing symbols around to get the right answer. At school, many children come to fear it as a string of baffling rules and procedures which, if followed correctly, somehow produce the answer. Yet ask any mathematician what the subject is really about and they speak instead of patterns, structure, symmetry, surprise and beauty. A proof is essential, but it is not the whole game.
There is also a practical problem. Once a mathematical goal has been reached by brute-force formalisation, the incentive for a human being to revisit the same result in a more elegant, illuminating way diminishes. The prestige all comes from being first. Grants move on. Students chase the next frontier. If an AI has already planted its flag on a proof, who sets out to climb the same mountain?
‘This might be the largest product recall in memory,’ wrote mathematician David Bessis in a recent blog post. ‘We sold a bogus idea of math to billions of people, and the bill is coming due.’
Why should anyone outside maths care? Because the same argument applies far beyond it. If mathematics can be hollowed out in this way, then every creative field is vulnerable to the same mistake.
Take art or literature. We often speak as though the value lies solely in the finished painting or the published novel. But that cannot be right. The value also lies in the striving: in the years of apprenticeship, the struggle to say something true, the singular life distilled into form. A machine may one day produce novels that fool critics and paintings that fool curators. Yet if we conclude from this that human creation no longer matters, we will have confused product with process in exactly the same way.
Culture is not just a warehouse of outputs. It is the accumulated record of conscious beings wrestling with reality. Science, too, is not simply a database of results but a civilisation-scale attempt to deepen understanding. When we replace human aspiration with automated abundance, we may gain quantity but lose meaning.
None of this means rejecting AI. Tools that check proofs, suggest ideas or remove drudgery should be welcomed. But they should remain servants of understanding, not substitutes for it. The question is not whether machines can produce artefacts that resemble mathematics, art or literature. The question is whether we still value the human activity from which those things arose. If we forget that distinction, the damage will not stop with mathematicians.
