I'm a former research mathematician who worked for a little while in AI research, and this article matched up very well with my own experience with this particular cultural divide. Since I've spent a lot more time in the math world than the AI world, it's very natural for me to see this divide from the mathematicians' perspective, and I definitely agree that a lot of the people I've talked to on the other side of this divide don't seem to quite get what it is that mathematicians want from math: that the primary aim isn't really to find out whether a result is true but why it's true.
To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.
I’ve worked in tech my entire adult life and boy do I feel this deep in my soul. I have slowly withdrawn from the higher-level tech designs and decision making. I usually disagree with all of it. Useless pursuits made only for resume fodder. Tech decisions made based on the bonus the CTO gets from the vendors (Super Bowl tickets, anyone?), not based on the suitability of the tech.
But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out that it's then just implicitly known by these types. The casual disregard for their fellow humans. The lack of true care for anything and anyone they touch.
Move fast and break things!! Even when it's the society you live in.
That arrogance and/or hubris is just another type of stupidity.
> Move fast and break things!! Even when it's the society you live in.
This is the part I don't get honestly
Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
Do they think the upheaval is simply going to be worth it?
Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Do they just never think about consequences at all?
I am trying not to be extremely negative about all of this, but the speed at which things are moving makes me think we'll hit the cliff before even realizing it is in front of us
That's the part I find unnerving
> Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Yes, partly that. Mostly they only care about their rank. Many people would burn down the country if it meant they could be king of the ashes. Even purely self-interested people should welcome a better society for all, because a rising tide lifts all boats. But not only are they selfish, they're also very stupid, at least in this aspect. They can't see the world as anything but zero sum, and themselves as either winning or losing, so they must win at all costs. And those costs are huge.
Reminds me of the Paradise Lost quote, "Better to reign in Hell, than serve in Heaven". Such an insightful book from Milton for understanding a certain type of person. Beautiful imagery throughout too; highly recommend.
> Do they just never think about consequences at all?
Yes, I think this is it. Frequently using social media and being “online” leads to less critical thought, less thinking overall, a smaller window that you allow yourself to think in, thoughts that are merely sound bites rather than fully fleshed-out thoughts, and so on. One's thoughts can easily become a milieu of memes and falsehoods. A person whose mind is in this state will do whatever anyone suggests for that next dopamine hit!
I am guilty of it all myself which is how I can make this claim. I too fear for humanity’s future.
> Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
> Do they think the upheaval is simply going to be worth it?
All technology causes upheaval. We've benefited from many of these upheavals to the point where it's impossible for most to imagine a world without the proliferation of movable type, the internal combustion engine, the computer, or the internet. All of your criticisms could easily have been made word for word by the Catholic Church during the medieval era. The "society" of today is no more of a sacred cow than its antecedent incarnations were half a millennium ago. As history has shown, it must either adapt, disperse, or die.
> The "society" of today is no more of a sacred cow than its antecedent incarnations were half a millennium ago. As history has shown, it must either adapt, disperse, or die
I am not concerned about some kind of "sacred cow" that I want to preserve
I am concerned about a future where those with power no longer need 90% of the population so they deploy autonomous weaponry that grinds most of the population into fertilizer
And I'm concerned there are a bunch of short sighted idiots gleefully building autonomous weaponry for them, thinking they will either be spared from mulching, or be the mulchers
Edit: The thing about appealing to history is that it also shows that when upper classes get too powerful they start to lose touch with everyone else, and this often leads to turmoil that affects the common folk most
As one of the common folk, I'm pretty against that
Exactly. It was described in Chesterton’s Fence:
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
I've called this out numerous times (and gotten downvoted regularly) as what I call the "Cult of Optimization", aka optimization-for-its-own-sake, aka pathological optimization.
It's basically meatspace internalizing and adopting the paperclip problem as a "good thing" to pursue, screw externalities and consequences.
And, lo and behold, my read for why it gets downvoted here is that a lot of folks on HN subscribe to this mentality, as it is part of the HN ethos to optimize, often pathologically.
Love your point. "Lack of alignment" affects more than just AIs.
Humans like to solve problems and be at the top of the heap. Such is life, survival of the fittest after all. AI is a problem to solve, whoever gets to AGI first will be at the top of the heap. It's a hard drive to turn off.
In theory this is actually pretty easy to "turn off"
You flatten the heap
You decrease or eliminate the reward for being at the top
You decrease or eliminate the penalty for being at the bottom
The main problem is that we haven't figured out a good way to do this without creating a whole bunch of other problems
> But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out that it's then just implicitly known by these types.
I worked in an organization afflicted by this and still have friends there. In the case of that organization, it was caused by an exaggerated glorification of management over ICs. Managers truly did act according to the belief, and show every evidence of sincerely believing in it, that their understanding of every problem was superior to the sum of the knowledge and intelligence of every engineer under them in the org chart, not because they respected their engineers and worked to collect and understand information from them, but because managers are a higher form of humanity than ICs, and org chart hierarchy reflects natural superiority. Every conversation had to be couched in terms that didn't contradict those assumptions, so the culture had an extremely high tolerance for hand-waving and BS. Naturally this created cover for all kinds of selfish decisions based on politics, bonuses, and vendor perks. I'm very glad I got out of there.
I wouldn't paint all of tech with the same brush, though. There are many companies that are better, much better. Not because they serve higher ideals, but because they can't afford to get so detached from reality, because they'd fail if they didn't respect technical considerations and respect their ICs.
Interestingly, the main article mentions Bill Thurston's paper "On Proof and Progress in Mathematics" (https://www.math.toronto.edu/mccann/199/thurston.pdf), but doesn't mention a quote from that paper that captures the essence of what you wrote:
> "The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true."
Incidentally, I've also seen a similar problem when reviewing HCI and computer systems papers. OK, sure, this deep learning neural net worked better, but what did we as a community actually learn that others can build on?
The Four Color Theorem is a great example! I think this story is often misrepresented as one where mathematicians didn't believe the computer-aided proof. Thurston gets the story right: I think basically everyone in the field took it as resolving the truth of the Four Color Theorem --- although I don't think this was really in serious doubt --- but in an incredibly unsatisfying way. They wanted to know what underlying pattern in planar graphs forces them all to be 4-colorable, and "well, we reduced the question to these tens of thousands of possible counterexamples and they all turned out to be 4-colorable" leaves a lot to be desired as an answer to that question. (This is especially true because the Five Color Theorem does have a very beautiful proof. I teach at a math enrichment program for high schoolers on weekends, and the proof was simple enough that we could get all the way through it in class.)
I'm not a mathematician so please feel free to correct me... but wouldn't there still be an opportunity for humans to try to understand why a result proved by a machine is true? Or are you afraid that the culture of mathematics will shift toward being impatient with these sorts of questions?
Well, it depends on exactly what future you were imagining. In a world where the model just spits out a totally impenetrable but formally verifiable Lean proof, then yes, absolutely, there's a lot for human mathematicians to do. But I don't see any particular reason things would have to stop there: why couldn't some model also spit out nice, beautiful explanations of why the result is true? We're certainly not there yet, but if we do get there, human mathematicians might not really be producing much of anything. What reason would there be to keep employing them all?
Like I said, I don't have any idea what's going to happen. The thing that makes me sad about these conversations is that the people I talk to sometimes don't seem to have any appreciation for the thing they say they want to dismantle. It might even be better for humanity on the whole to arrive in this future; I'm not arguing that one way or the other! Just that I think there's a chance it would involve losing something I really love, and that makes me sad.
I don’t think the advent of superintelligence will lead to increased leisure time and increased well-being / easier lives. However, if it did I wouldn’t mind redundantly learning the mathematics with the help of the AI. It’s intrinsically interesting and ultimately I don’t care to impress anybody, except to the extent it’s necessary to be employable.
I would love that too. In fact, I already spend a good amount of my free time redundantly learning the mathematics that was produced by humans, and I have fun doing it. The thing that makes me sad to imagine --- and again, this is not a prediction --- is the loss of the community of human mathematicians that we have right now.
That is kind of hard to do. Human reasoning and computer reasoning are very different, enough so that we can't really grasp it. Take chess, for example. Humans tend to reason in terms of positions and tactics, but computers just brute-force it (I'm ignoring stuff like AlphaZero because computers were way better than us even before that). There isn't much to learn there, so GMs just memorize the computer moves for so and so situation and then go back to their past heuristics after n moves
> so GMs just memorize the computer moves for so and so situation and then go back to their past heuristics after n moves
I think they also adjust their heuristics, based on looking at thousands of computer moves.
As Feynman once said [0]: "Physics is like sex. Sure, it may give some practical results, but that's not why we do it." I don't think it's any different for mathematics, programming, a lot of engineering, etc.
I can see that a day might come when we (research mathematicians, math professors, etc.) no longer exist as a profession, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.
[0] I've seen this attributed to the Character of Physical Law but haven't confirmed it
> Perhaps most telling was the sadness expressed by several mathematicians regarding the increasing secrecy in AI research. Mathematics has long prided itself on openness and transparency, with results freely shared and discussed. The closing off of research at major AI labs—and the inability of collaborating mathematicians to discuss their work—represents a significant cultural clash with mathematical traditions. This tension recalls Michael Atiyah's warning against secrecy in research: "Mathematics thrives on openness; secrecy is anathema to its progress" (Atiyah, 1984).
Engineering has always involved large amounts of both math and secrecy, what's different now?
AI is undergoing a transition from academic research to industry engineering.
(But the engineers want the benefits of academic research -- going to conferences to give talks, credibility, intellectual prestige -- without paying the costs, e.g. actually sharing new knowledge and information.)
The academics didn't make the money and are sad about it. Tale as old as time.
No offense intended, but this is a pattern I've seen over and over again
I get the feeling you've never really talked to many academics.
Especially not mathematicians! No one goes into math academia for the money, and people with math Ph.D.'s are often very employable at much higher salaries if they jump ship to industry. The reason mathematicians stay in the field --- and I say this as someone who didn't stay, for a variety of reasons --- is because they love math and want to spend their time researching and teaching it.
I work with the ones that made the jump to industry, so no, I'm confronted with the divide day in and day out. The academics that either switch to industry or maintain close industry ties typically do not seem to share these concerns, or at least can contextualize them.
Academics that decided they didn't really want to be academics, but instead switched to industry. Seems like a significant sample bias, no?
They're still academics...
I miss when money people treated my motivations like a black box.
Nice article. I didn't read every section in detail, but I think it makes a good point that AI researchers maybe focus too much on the thought of creating new mathematics, while being able to reproduce, index, or formalize existing mathematics is really the key goal imo. This will then also lead to new mathematics.

I think the more you advance in mathematical maturity, the bigger the "brush" becomes with which you make your strokes. As an undergrad a stroke can be a single argument in a proof, or a simple lemma. As a professor it can be a good guess for a well-posedness strategy for a PDE. I think AI will help humans find new mathematics with much bigger brush strokes. If you need to generalize a specific inequality on the whole space to Lipschitz domains, perhaps AI will give you a dozen pages, perhaps even of formalized Lean, in a single stroke. If you are a scientist and consider an ODE model, perhaps AI can give you formally verified error and convergence bounds using your specific constants. You switch to a probabilistic setting? Do not worry. All of these are examples of not very deep but tedious and non-trivial mathematical busywork that can take days or weeks.

The mathematical ability necessary to do this has in my opinion already been demonstrated by o3 in rare cases. It cannot piece things together yet, though. But GPT-4 could not piece together proofs to undergrad homework problems, while o3 now can. So I believe improvement is quite possible.
My take is a bit different. I only have a math undergrad and only worked as an AI trainer so I’m quite “low” on the totem pole.
I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who solely cared about solving problems via "algorithms". This was in the period just before modern math emerged in the late 1800s, roughly, when the algorithmists, intuitivists, and logically oriented mathematicians coalesced into a combination that includes intuition, algorithms, and the importance of logic, leading to the modern way we do proofs and our focus on proofs.
These algorithmists didn’t care about the so called “meaningless” operations that got an answer, they just cared they got useful results.
I think the article downplays this side of math, and it is the side AI will be best at, or most useful for. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.
This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
Is it really a culture divide or is it an economic incentives divide? Many AI researchers are mathematicians. Any theoretical AI research paper will typically be filled with eye-wateringly dense math. AI dissolves into math the closer you inspect it. It's math all the way down. What differs are the incentives. Math rewards openness because there's no real concept of a "competitive edge": you're incentivized to freely publish and share your results, as that is how you get recognition and, hopefully, a chance to climb the academic ladder. (There might be a competitive spirit between individual mathematicians working on the same problems, but this is different from systemic market competition.) AI is split between being a scientific and a capitalist pursuit; sharing advances can mean the difference between making a fortune and being outmaneuvered by competitors. It contaminates the motives. This is where the AI researcher's typical desire for "novel results" comes from as well: they are inheriting industry's drive to produce economic innovations. It's a tidier explanation to tie the cultural differences to material motive.
> Many AI researchers are mathematicians. Any theoretical AI research paper will typically be filled with eye-wateringly dense math. AI dissolves into math the closer you inspect it. It's math all the way down.
There is a major caveat here. Most 'serious math' in AI papers is wrong and/or irrelevant!
It's even the case for famous papers. Each lemma in Kingma and Ba's Adam optimization paper is wrong, the geometry in McInnes and Healy's UMAP paper is mostly gibberish, etc.
I think it's pretty clear that AI researchers (albeit surely with some exceptions) just don't know how to construct or evaluate a mathematical argument. Moreover the AI community (at large, again surely with individual exceptions) seems to just have pretty much no interest in promoting high intellectual standards.
> This quest for deep understanding also explains a common experience for mathematics graduate students: asking an advisor a question, only to be told, "Read these books and come back in a few months."
With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens it from months to days.
I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math because I did not even know that part of math existed. Watching how DeepSeek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.
I think you’re missing the point of what the advisor is saying.
No, I get it.
My point is that a human advisor does not have enough time to answer questions and correctly explain the subject. I may get like 4 hours a week, if lucky. Books are just a cheap substitute for real dialog and reasoning with a teacher.
Most ancient philosophy works were written in the form of dialog. It is a much faster way to explain things.
AI is a game changer. It shortens the feedback loop from a week to an hour! It makes mistakes (as humans do), but it is faster to find them. And it also develops cognitive skills while finding them.
It is like programming in low-level C in Notepad 40 years ago, versus a high-level language with an IDE, VCS, unit tests...
Or like farming resources in Rust. Boring repetitive grind...
Books aren't just a lower quality version of dialog with a person, though. They operate entirely differently. With very few people can you sit and think quietly for 30 minutes straight without talking, but a book you can put down and come back to at will.
I don't think professional programmers were using notepad in 1985. Here's talk of IDEs from an article from 1985: https://dl.acm.org/doi/10.1145/800225.806843 It mentions Xerox Development Environment, from 1977 https://en.wikipedia.org/wiki/Xerox_Development_Environment
The feedback loop for programming / mathematics / other things I've studied was not a week in the year 2019. In that ancient time the feedback loop was maybe 10% slower than with any of these LLMs, since you had to look at Google search.
> One question generated particular concern: what would happen if an AI system produced a proof of a major conjecture like the Riemann Hypothesis, but the proof was too complex for humans to understand? Would such a result be satisfying? Would it advance mathematical understanding? The consensus seemed to be that while such a proof might technically resolve the conjecture, it would fail to deliver the deeper understanding that mathematicians truly seek.
I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.
But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?
Or more generally, how do you verify a proof that you don't understand?
Serious theorem-proving AIs always write the proof in a formal syntax where it is possible to check that the proof is correct without issue. The most popular such formal language is Lean, but there are many others. It's just like having a coding AI: it may write some function, and you check whether it compiles. If the AI writes a program/proof in Lean, it will only compile if the proof is correct. Checking the correctness of proofs is a much easier problem than coming up with the proof in the first place.
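To make the "only compiles if the proof is correct" point concrete, here is a minimal sketch of my own (not from the article or from any particular AI system) of what a kernel-checked Lean 4 proof looks like; Nat.add_comm is a lemma from Lean's standard library, and the theorem name is just an arbitrary label:

    -- A trivial theorem together with a proof term. The Lean kernel accepts
    -- this file only because Nat.add_comm a b really does establish the
    -- stated equality; replace it with a term that proves something else and
    -- the file fails to check.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The same mechanism scales to enormous machine-generated proofs: the kernel doesn't care who, or what, wrote the term.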
oersted's answer basically covers it, so I'm mostly just agreeing with them: the answer is that you use a computer. Not another AI model, but a piece of regular, old-fashioned software that has much more in common with a compiler than an LLM. It's really pretty closely analogous to the question "How do you verify that some code typechecks if you don't understand it?"
In this hypothetical Riemann Hypothesis example, the only thing the human would have to check is that (a) the proof-verification software works correctly, and that (b) the statement of the Riemann Hypothesis at the very beginning is indeed a statement of the Riemann Hypothesis. This is orders of magnitude easier than proving the Riemann Hypothesis, or even than following someone else's proof!
> Or more generally, how do you verify a proof that you don't understand?
This is the big question! Computer-aided proof has been around forever. AI seems like just another tool from that box. Albeit one that has the potential to provide 'human-friendly' answers, rather than just a bunch of symbolic manipulation that must be interpreted.
> Throughout the conference, I noticed a subtle pressure on presenters to incorporate AI themes into their talks, regardless of relevance.
This is well-studied and not unique to AI, the USA, the English language, or even Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers explains the history of how technologies are introduced. If the results are tallied in population, money, or other prosperity, the civilizations and their language groups that have systematic ways to explore and apply new technology are the "winners" in the global context.
AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance, and rearranging chairs on the deck of the Titanic, but this is getting off-topic for maths.
I think another way to think about this is that subtly trying to consider AI in your AI-unrelated research is just respecting the bitter lesson. You need to at least consider how a data-driven approach might work for your problem. It could totally wipe you out - make your approach pointless. That's the bitter lesson.
> The last mathematicians considered to have a comprehensive view of the field were Hilbert and Poincaré, over a century ago.
Henri Cartan of the Bourbaki group had not only a more comprehensive view, but also a greater grasp of the potential of mathematical modeling and description
Mathematics is, IMO, not the axioms, proofs, or theorems. It's the human process of organizing these things into conceptual taxonomies that appeal to what is ultimately an aesthetic sensibility (what "makes sense"), updating those taxonomies as human understanding and aesthetic preferences evolve, as well as practical considerations ('application'). Generating proofs of a statement is like a biologist identifying a new species, critical but also just the start of the work. It's the macropatterns connecting the organisms that lead to the really important science, not just the individual units of study alone.
And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even 'talking things through' (much like mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).
Similar stuff holds about art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor. One person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and this can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.
AI is young, and at the center of the industry spotlight, so it attracts a lot of people who are not in it to understand anything. It's like when the whole world got on the Internet, and the culture suddenly shifted. It's a good thing; you just have to dress up your work in the right language, and you can get funding, like when Richard Bellman coined the term "dynamic programming" to make it palatable to the Secretary of Defense, Charles Wilson.
AI has been around since at least the 1970s.
Not in any way that is relevant to the conversation about AI that has exploded this decade
Or 1950 if you consider the Turing Test, or 1912 if you consider Torres Quevedo's machine El Ajedrecista, which plays rook endings. The illusion of AI dates back to The Turk in 1770.
Yes, and all of these dates would be considered "young" by most mathematicians!