Could you expand on this? I don't see maths as a language for quantities specifically (i.e. what does symmetry have to do with quantities).
> just too tedious (but not impossible) for a human being to work through the proof.
Already happened with the four colour theorem arguably.
That’s easily proven to be true. “Two plus two equals four” is a theorem, so is “three plus three equals six”, etc.
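A minimal Lean sketch of the point (each such identity is its own tiny theorem, and the checker accepts it by computation):

```lean
-- Two separate theorems, and you can keep generating more like them forever.
theorem two_plus_two : 2 + 2 = 4 := rfl
theorem three_plus_three : 3 + 3 = 6 := rfl
```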
This is a somewhat bleak picture of math. We also have the opposite phenomenon of increasing simplicity: both statements and proofs become more straightforward and simple once one has access to deeper mathematical constructions.
For example: Bezout's theorem would like to state that two curves of degrees m and n intersect in mn points. Except that two parallel lines intersect in 0 points instead of 1·1 = 1, two disjoint circles intersect in 0 points instead of 2·2 = 4, and a line tangent to a circle intersects it in 1 point instead of 1·2 = 2. These exceptions merge into a simple picture once one goes to projective space, complex numbers and schemes. Complex numbers lead to lots of other instances of simplicity.
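For reference, the unified statement (roughly, and under the usual hypotheses) reads something like this:

```latex
% Bezout: for plane curves C, D of degrees m and n over an algebraically
% closed field, viewed in projective space, with no common component,
% counting intersection points with multiplicity:
\[
  \sum_{p \,\in\, C \cap D} I_p(C, D) \;=\; m \cdot n
\]
```

The parallel lines meet at a point at infinity, the missing circle intersections show up over the complex numbers and at infinity, and the tangent line counts its point with multiplicity two.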
Similarly, proofs can become simple where before one had complicated ad-hoc reasoning.
Feynman once made the same point about the laws of physics. Someone figuring out the rules of chess by watching games first learns the basic rules (how the pieces move) and then moves on to the complex exceptions (en passant, pawn promotion). In physics, by contrast, what often happens is that different sets of rules for apparently distinct phenomena become aspects of a unity (e.g. heat, light and sound were seen as distinct things but are now all seen as movements of particles; the unification of electricity and magnetism).
Of course, this unification pursuit is never complete. Mathematics books/papers constantly seem to pull a rabbit out of a hat. This leads to 'motivation' questions for why such a construction/expression/definition was made. For a few of those questions, the answer only becomes clear after more research.
I think you need to be careful talking about "infinite" in the context of math. If the number of quantities, relationships etc. is finite, so are all their combinations. Even things like the infinitude of available numbers might have fixed patterns that render their relevant properties effectively finite, and lead to further distinctions, e.g. finite vs countable, etc.
Personally, I feel like math has a bit of a legacy problem. It holds on to the conventions of an art that is very old, with very different initial assumptions at its conception, and this is now holding it back somehow. I lack the background to effectively demonstrate this other than "things I know/understand seem less intuitive in standard mathematical terms", e.g. generating functions and/or integrals feel easier to understand (to me) when you understand them to be software-like 'loops'.
In fact, the idea of "constructivist math" seems (again, to me) to beg for a more algorithmic/computational approach.
In any case, if we stick with Riemann sums, there should be a strong relationship to Generating Functions (which there is).
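To make that concrete, here is a rough Python sketch of the 'loops' view (a toy example of my own, not a claim about how the formal theory is set up): multiplying two generating functions is a nested loop (a discrete convolution), and a Riemann sum is a loop accumulating f(x)·dx.

```python
def series_mul(a, b, terms=10):
    """Coefficients of the product of two power series, truncated to `terms`."""
    c = [0] * terms
    for i, ai in enumerate(a[:terms]):
        for j, bj in enumerate(b[:terms - i]):
            c[i + j] += ai * bj
    return c

# 1/(1-x) = 1 + x + x^2 + ...; squaring it counts pairs: 1, 2, 3, ...
geom = [1] * 10
print(series_mul(geom, geom))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def riemann_sum(f, lo, hi, n=100_000):
    """Left Riemann sum: the 'loop' view of an integral."""
    dx = (hi - lo) / n
    return dx * sum(f(lo + k * dx) for k in range(n))

print(riemann_sum(lambda x: x * x, 0.0, 1.0))  # ~ 1/3
```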
> Generating functions in code are basically a rote repetition of the mathematical definitions
GFs with a mathematical basis may have, for example, set-theoretic definitions that are not similar to, say, Turing machines. Any non-constructivist math is automatically not like code.
Also, I appreciate anonymity, but, to my point
> I live by myself in a remote mountain cave beyond the ken of civilised persons, and can only be contacted during a full moon, using certain arcane rites that are too horrible to speak of.
Okay.
= I live in California, and the nearest Starbucks is more than 20 miles away.
>>can only be contacted during a full moon
= As a night person, I am awake when the streetlight outside my house turns on.
>>certain arcane rites that are too horrible to speak of
= In order to contact me, you must install Microsoft Teams.
Overall, it's not that bad, except for the MS Teams thing. ;)
I don't literally live in a cave, but fortunately not everyone is so allergic to whimsical language :D.
https://en.wikipedia.org/wiki/Alexander_Grothendieck#Retirem...
"Local villagers helped sustain him with a more varied diet after he tried to live on a staple of dandelion soup." - like most people would.
About the substance, I agree that there are fair grounds for concern, and it's not just about mathematics.
The best case scenario is rejection and prohibition of uses of AI that fundamentally threaten human autonomy. It is theoretically possible to do so, but since capital and power are pro-AI[^1], getting there requires a social revolution that upends the current world order. Even if one were to happen, the results wouldn't last for too long. Unless said revolution were so utterly radical that it would set us on a return trajectory to the Middle Ages (I have something of the sort published somewhere, check my profile!).
I'm an optimist when it comes to the enabling power of AI for a select few. But I'm a pessimist otherwise: if the richest nation on Earth can't educate its citizens, what hope is there that humans will be able to supervise and control AI for long? Given our current trajectory, if nothing changes, we are set for civilizational catastrophe.
[^1]: Replacing expensive human labor is the most powerful modern economic incentive I know of. Money wants, money gets.
And I envy such skill, because I like to think of myself as not entirely stupid, yet I would never be able to write/speak this way because I just do not have an aptitude for it.
So I don't see any reason to worry about the impact of AI. Unlike most fields with AI worries, mathematical research isn't even a significant employment area, and people with jobs doing it could almost certainly be doing something else for more money.
But it did. Painter used to be a trade where you could sell your painting skills as, well, a skill applicable to more than purely aesthetic ends, simply because there was no other way to document the world around you. It just isn't anymore, because of cameras. Professional oil portrait painter isn't a career in 2025.
Source? If anything I suspect there are more people making a living as painters now than at any point in history.
Is running an art/vocation comparable to photography and/or painting? We no longer have mailmen who run the length of the country afaik.
But the decline of running did heavily contribute to sedentary lifestyles in Western countries, along with a bunch of other things.
> mathematical research isn't even a significant employment area
I agree. I think it will move from mathematicians "doing" math to managing computerised systems that do it instead. I'm sure we already have such systems.
I think far more important to humanity is improving mathematical literacy. From my perspective, math is made for mathematicians; it could be more accessible. As "pure" math matures, there is still plenty of opportunity in "applied" math (however you might define it).
Given that kind of picture of reality, it is little wonder that AI seems like such a profound threat to so many people (putting aside for the moment the distinction between the aspirations of AI companies and the actual affordances it possesses). If being human is to be an economic instrument, then any AI that could eliminate the economic value of human beings is something akin to extinction. The god of economics has no further need of you. You may die now.
But this utilitarian view of the world reeks of nihilism. It is the world of total work, of work for work's sake. We never inquire about the ends that are the very reason for work in the first place. We never come to an understanding that economies exist for us, that we create them for mutual benefit. And we never seem to grasp that the economic part of human life is only part of human life, that it exists for the sake of those parts of life, the more important and most important parts of life, that are not a matter of economics. We have come to view life as meaningless, so we run into the embrace of the god of economics, losing ourselves in its endless churn, its immediate goals, truncating our minds so that we do not conceive of anything else, longing to escape the horror of the abyss that awaits us outside of its dreary confines...
The point of studying something in a theoretical capacity is to understand it, not to produce something of economic value. Each person must come into understanding from a state of not understanding. Homo economicus does not comprehend this. Homo economicus lives to eat and shit and cum and to accumulate things.
This framework is explicitly enforced by copyright law. Because a copyright monopoly is automatically granted to every content creator, every person is automatically expected to participate in the copyright system.
Copyright law hinges on incompatibility. The easier it is to make compatible work, the easier it is to make derivative work, which copyright defines as the ultimate evil.
Generative statistical models (what everyone is calling AI) are calling this bluff harder than ever. Derivative work is easier than any time in history.
So what do we do about it? It's pretty obvious from my perspective that the best move forward is to eliminate copyright for everyone. It seems instead, that the most likely outcome is to eliminate copyright exclusively for the giant corporations that successfully launder their collaboration (derivative work) through large generative models.
Such libraries would need documentation, or nobody would know when to use them, and then sharing is pointless.
If corporations build them, they would have to decide what to contribute to the commons and what to keep private. But that’s no different than any other language.
I am not.
From an energy-efficiency perspective, the human brain is a very, very effective computational machine. Computers are not. Think about the scale of a network of computers able to achieve similar capabilities, and its energy consumption... it would be enormous. With big infrastructure comes a high need for maintenance, which is costly and requires a lot of people just to prevent it from breaking down. With a lot of people in one place there are socioeconomic costs: production and transportation need to be built around such a center. And if you have a centralized system, you are prone to attack from adversaries. In short, I do not think we are even close to what the author is afraid of. We are just getting closer to understanding what would be needed to actually start thinking about building AI, if it is ever possible at all.
Can you explain why you think that? Very often, mechanical efficiency outperforms biological. Humans have existed for thousands of years, neurons even longer. Computers and AI are relatively recent; we haven't really begun to explore the optimisation possibilities.
I think looking at power consumption for the very edge of what technology is just barely capable of may be misleading, since that's inherently at one extreme of the current cost-capability trade-off curve[0] and stands to drop the most drastically from efficiency improvements.
You can now run models equivalent in capability to the initial version of ChatGPT on sub-20W chips, for instance. Or, looking over a longer timeframe, we can now do far more on a 1-milliwatt chip[1] than on the 150kW ENIAC[2].
[0]: https://i.imgur.com/GydBGRG.png
[1]: https://spectrum.ieee.org/syntiant-chip-plays-doom
[2]: https://cse.engin.umich.edu/about/history/eniac-display/
The amount of parallelism in the human brain is enormous. Not just each neuron, but each synapse has computational capacity. That means ~10^14 computational units or 100 trillion processing units -- on about 20 watts.
That doesn't even touch the bandwidth issues. Getting the sensory input in and out of the brain plus the bandwidth to get all of the processing signals between each neuron is at least another petabit per second. So, on bandwidth capacity alone we are 25+ years away (assuming the last 25 years of growth continues). And in humans that comes with 18 years of training at that massive bandwidth and computational power.
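A back-of-the-envelope version of that bandwidth argument, using the figures above plus some purely illustrative assumptions of my own for the machine side:

```python
import math

# Figure quoted above for the brain (a rough order of magnitude, not a measurement):
brain_bandwidth_bps = 1e15    # "another petabit per second" of internal signalling

# Purely illustrative assumptions for today's machines (not sourced):
machine_bandwidth_bps = 1e12  # say ~1 Tbit/s of usable interconnect
doubling_period_years = 2.5   # if the historical growth rate keeps up

# Years of doubling needed to close a ~1000x bandwidth gap:
years = doubling_period_years * math.log2(brain_bandwidth_bps / machine_bandwidth_bps)
print(round(years))           # ~25 years under these assumptions
```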
Also, we have no idea what a general intelligence algorithm looks like. We are just now getting multimodal LLMs.
From the computational/bandwidth perspective we are still 30 years from a computer being able to process the information a single human brain does, and even then it will be consuming 29+ megawatts of energy. If you had to feed a human 29 megawatts' worth of power, no business would be profitable. Humans wouldn't even survive.
Sorry, but the notion that we are close to AGI because we have good word predictors is fantasy. But, there will be some amazing natural language human-computer interface improvements over the next 10 years!
With growing a brain, we barely know where to begin. Not in terms of growing a few neurons in a petri dish. Nourishing the complex interconnecting structure of neurons that is a human brain is nowhere even on the horizon. Much less growing the structure from cells. At least with the LLM/AI techniques we have control over the entire processing pipeline.
And I agree, that is an ethical minefield.
It's like calling a 1 bit half-adder circuit a computer.
Organoids are very interesting scientifically because we will need to start with organoids to grow any sort of biological system. And they do behave closer to native than individual cells so they can be used to research things like cell metabolism and drug response. But they are not anywhere close to an organ. And unfortunately they aren't even close enough to replace animal testing, yet.
I find it hard to think of a more ethically questionable programme!
> Computer chip with built-in human brain tissue gets military funding
The project, called DishBrain, was spun out into the startup Cortical Labs.
> World's first 'body in a box' biological computer uses human brain cells with silicon-based computing
> Cortical Labs said the CL1 will be available from June, priced at around $35,000.
> The use of human neurons in computing raises questions about the future of AI development. Biological computers like the CL1 could provide advantages over conventional AI models, particularly in terms of learning efficiency and energy consumption.
> Ethical concerns also arise from the use of human-derived brain cells in technology. While the neurons used in the CL1 are lab-grown and lack consciousness, further advancements in the field may require guidelines to address moral and regulatory issues.
What's the goalpost here, though? Modern "AI" stuff we previously thought not possible, a proper full human-brain simulation, or a general form of higher AI that could come from either place?
> The amount of parallelism in the human brain is enormous.
That only demonstrates the possibilities yet to be explored. Biology has a millions-of-years head start; what's possible today could have been dismissed a few centuries ago by the same argument as yours. You say "We are just now getting multimodal LLMs" like it's somehow late.
At a fundamental level, what holds back biology is all the other things it does (i.e. staying alive) and the limits imposed (e.g. heat) that a purpose-made device can optimise away. Any physical, thermodynamic, or communication-theoretic argument over what's possible would hold back both biological and mechanical devices. Only there are fewer material constraints for machines; they can even explicitly exploit quantum mechanics.
> Sorry, but the notion that we are close to AGI
Seems we are arguing different things. I went back through the thread, and believe the proposition is: "us, humanity, being able to build AI or something being very close to that", which I translate as a comment on our literal species. I took your statement "From the energy efficiency perspective human brain is very, very effective computational machine" as being in that scope, and not just a reference to the current era (or Decade!).
> That only demonstrates the possibilities yet to be explored. Biology has a millions-of-years head start; what's possible today could have been dismissed a few centuries ago by the same argument as yours.
Yes, we may only have a few centuries left to go before AGI. I was going with a few decades, but now that you mention it, a few centuries is more likely given we are running into Moore's Law limits with transistor technology.
> At a fundamental level, what holds back biology is all the other things it does (i.e. staying alive) and the limits imposed (e.g. heat) that a purpose-made device can optimise away.
You don't honestly believe that AGI will not have to deal with continuity, reliability, and heat dissipation issues that living things have to deal with, do you? All the more reason megawatts vs handful of watts is relevant. You just pointed out that it's not just an algorithmic optimization problem, but a much more complex problem of which we are barely scratching the surface.
> Seems we are arguing different things. I went back through the thread, and believe the proposition is: "us, humanity, being able to build AI or something being very close to that", which I translate as a comment on our literal species. I took your statement "From the energy efficiency perspective human brain is very, very effective computational machine" as being in that scope, and not just a reference to the current era (or Decade!).
I was replying to a literal statement about increased mechanical efficiency over biological efficiency. Which, in the case of AGI is completely inverted. Biological systems are so much more efficient that the comparison is embarrassing.
Also, I was saying our species is at least 3 decades from in-silico AGI. That doesn't mean we couldn't have some wild new tech that no one thought of next year. But the chances are so slim you might as well be saying we will genetically engineer flying pigs.
So you're questioning the above comment's argument based on a hand-wavy claim about completely speculative future possibilities?
As it stands, there's no disagreeing with the human brain's energy efficiency for all the computing it does in so many ways that AI can't even begin to match. This to not even speak of the whole unknown territory of whatever it is that gives us consciousness.
> whatever it is that gives us consciousness
Talk about hand-wavy; "consciousness" might not be a real thing. You might as well ask if AI has a soul.
Also, there's nothing hand-wavy about pointing out (aside from all the vastly efficient parallelism and generalist computing that the brain does with absurdly minimal power needs) that it also seems to be where our consciousness is housed.
You can go ahead and navel-gaze about "how do we know if we're conscious? How do we know an LLM isn't?", but I certainly feel conscious, and so do you, and we both demonstrably have self-directed agency that indicates this widely and solidly accepted phenomenon, and that is very distinct from anything any LLM can demonstrably do, regardless of what AI bros like to claim about consciousness not being real.
Arguments like these remind me of relativist fall-back idiocies of asking "but what is a spoon" whenever confronted with any hard counterargument to their completely speculative claims about X or Y.
That said, the article doesn't assume such a thing will happen soon, just that it may happen at some time in the future. That could be centuries away - I would still argue the end result is something to be concerned about.
The state space of mathematics is pretty different from chess, but I think ultimately mathematicians are just running something like A* on the space of propositions, with a custom heuristic that is learned by approximating the result of running A* with that heuristic, where your error is just the difference between the actual and predicted length of proof.
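Something like this, in rough Python (all the names here are placeholders: a real system would get `neighbors` from applicable inference rules and `heuristic` from a learned estimate of remaining proof length):

```python
import heapq
from itertools import count

def prove(start, is_goal, neighbors, heuristic):
    """A* over propositions: returns a proof path (list of propositions) or None."""
    tie = count()  # tie-breaker so the heap never has to compare propositions
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, _, cost, prop, path = heapq.heappop(frontier)
        if is_goal(prop):
            return path
        for nxt in neighbors(prop):
            new_cost = cost + 1  # every inference step costs 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt), next(tie), new_cost, nxt, path + [nxt]),
                )
    return None
```

The learning part is then just fitting `heuristic` so its estimates match the proof lengths the search actually finds, i.e. minimising exactly the error described above.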
Mathematics is just proof-driven development. To a spectator it might look like mathematics is about writing proofs, but that's no different from seeing a software developer write a lot of tests. The proofs are the best tools against insidious logic bugs that the society of mathematics has come up with in the last few hundred years. Mathematicians would welcome automating all the proofs, just like software engineers are happy for code assistants to take over the task of writing tests.
A_D_E_P_T•1mo ago
Also:
> To expand: what if the practice of mathematics becomes completely determined by the diktats of a vast capitalist machinery of proprietary machine learning models churning out proof after proof, and theory after theory, conjured from the aether of all possible true statements?
I don't think that this is possible even in theory, as computational resources are limited and "the aether of all possible true statements" is incomprehensibly vast. (There's a difference of many orders of magnitude between the number of true-seeming-yet-false statements and the number of elementary particles in the visible universe. Far more statements than particles.) You can't brute force it.
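A toy way to see the scale (the alphabet size and statement length are arbitrary assumptions, purely to illustrate the gap):

```python
# Candidate statements of bounded length versus particles in the visible universe (~10^80):
alphabet_size = 100                       # assumed symbol vocabulary
max_length = 60                           # assumed bound on statement length
candidates = alphabet_size ** max_length  # 10^120 strings to sift through
print(candidates > 10 ** 80)              # True, by ~40 orders of magnitude
```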
andyjohnson0•1mo ago
I agree, but... Spend time formalising a large part of existing mathematics and its proofs, train a bunch of sufficiently powerful generative models on that, together with cooperative problem-solving and proof strategies, give them access to proof assistants and adequate compute resources, and something interesting could happen.
I suspect the barrier is finding a business model that would pay for this. Turning mathematics into an industrial, extruded-on-demand product might work, but I don't know who (except maybe the NSA) would stump up the money.
esperent•1mo ago
This could lead to the proof being rejected entirely, or fixed and strengthened.
Confirmation: if the AI understands it well enough that we're even considering asking it to confirm the proof, then you can do all kinds of things. You can ask it to simplify the entire proof to make it easier for humans to verify. You can ask it questions about parts of the proof you don't understand. You can ask it if there are any interesting corollaries or applications in other fields. Maybe you can even ask it to rewrite the whole thing in LEAN (although, like the author, I know nothing about LEAN and have no idea if this would be useful).
zarzavat•1mo ago
Rejecting a proof would be more complicated, because while for confirming a proof you only need to check that the main statement in the formalisation matches that of the conjecture, showing that a proof has been rejected requires knowledge of the proof itself (in general).
lblume•1mo ago
Why? If a proof is wrong it has to be locally invalid, i.e. draw some inference which is invalid according to rules of logic. Of course the antecedent could have been defined pages earlier, but in and of itself the error must be local, right?