* empty products being 1 of course
Of course, I generally hear the fundamental theorem of arithmetic phrased as “every integer greater than one…” which is making its own little special case for the number 1.
On the contrary: it is extremely inconvenient not to allow the product of an empty sequence of numbers to equal 1. The sum of an empty sequence is 0. The Baz of an empty sequence of numbers, for any monoid Baz, is the identity element of that monoid. Any other convention is going to be very painful and full of its own exceptions.
There are no exceptions to any rules here. 1 is not prime. Every positive integer can be expressed as the unique product of powers of primes. 1's expression is [], or 0000..., or ∅.
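For what it's worth, the empty-product convention is already baked into Python's standard library; a trivial check (my snippet, not from the thread):

  import math

  # The empty product is the multiplicative identity, the empty sum the
  # additive identity -- exactly the monoid convention described above.
  assert math.prod([]) == 1
  assert sum([]) == 0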
I meant that it’s inconvenient to require engaging with that concept directly in the everyday definition of prime numbers.
And note that this convention is not at all required for the point I'm making regarding prime numbers. As you say yourself, restrict the theorem to integers greater than 1, and you can forget about empty products (and it is still easier to state if 1 is not prime (which it isn't)).
That is enough justification for me of 1 not being prime. It has a factorisation!
And then C/C++ compilers are subtly inconsistent. If 0 is a valid index, then null should correspond to uintptr_t(-1), not the 0 address. That leads to non-trivial complications in OS implementations to make sure that address 0 is not mapped, since from the hardware's point of view 0 is a perfectly normal address.
In the same way we index from 0 because indexing gets way more awkward if we index from 1.
In-band sentinels are both quite rare and equally convenient with -1 or 0. In fact I would say -1 is a bit more elegant, because sometimes you need multiple sentinel values, and then you can easily use -2 (what are you going to do otherwise, use 0 and 1 and then index from 2?).
The more common operations are things like indexing into flattened multidimensional arrays, or dealing with intervals, which are both way more elegant with 0-based indexing.
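A small sketch of the flattened-array point (my illustration, with made-up helper names):

  # 0-based: the flat offset of (row, col) in a matrix with `ncols`
  # columns is just row * ncols + col.
  def flat_index_0(row, col, ncols):
      return row * ncols + col

  # 1-based: the same formula needs correction terms at both ends.
  def flat_index_1(row, col, ncols):
      return (row - 1) * ncols + (col - 1) + 1

  assert flat_index_0(0, 0, 4) == 0    # first element, 0-based
  assert flat_index_1(1, 1, 4) == 1    # first element, 1-based
  assert flat_index_0(2, 3, 4) == 11   # last element of a 3x4 matrix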
0 is a valid index into an array. It's even a valid index into global memory in some environments. Not mapping memory to address 0 is completely trivial. I'm not sure what non-trivial complications you're thinking of.
Depending on your definition of divisor, it excludes everything except 1 and -1, whose two integer divisors are 1 and -1. But then, if you specify that "divisor" means "positive integer divisor", it no longer automatically excludes the negative numbers, since the two positive integer divisors of -2 are 1 and 2. (Incidentally, plenty of algebraists, myself included, are perfectly comfortable with including -2 as a prime.)
At that time we can determine if 1 is prime.
If it’s found that Eratosthenes’ sieve is the only prime generating function then we have our answer.
Namely, if the sieve is the only generating function for all of the primes, then 1 would need to be omitted as a prime, since removing its multiples would remove every number, thus failing to generate the list of primes.
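A sketch of that argument in code (my illustration, a standard sieve with the starting point made configurable):

  def sieve(limit, smallest_prime=2):
      # Sieve of Eratosthenes: repeatedly take the smallest remaining
      # candidate as prime and remove its proper multiples.
      candidates = set(range(smallest_prime, limit))
      primes = []
      for n in range(smallest_prime, limit):
          if n in candidates:
              primes.append(n)
              candidates -= set(range(2 * n, limit, n))
      return primes

  print(sieve(20))                    # [2, 3, 5, 7, 11, 13, 17, 19]
  print(sieve(20, smallest_prime=1))  # [1] -- removing 1's multiples removes everything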
Nowadays 2 is considered prime. Seems silly to question why someone is claiming 2 is prime if that is how it is defined in the modern day.
> What makes you think two is prime
The current mathematical definition of a prime number
I will admit that for me it was being brainwashed through years of high school and university mathematics.
0^0 = 1? Yes, it’s simpler that way.
0! = 1? Yes, it’s simpler that way.
0/0 = ∞? No, it’s undefined.
0.9999… = 1? Yes, it’s just two ways of expressing the same number.
1+2+3+… = -1/12? No, but if it did have a finite value, that’s what it would be.
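As it happens, Python agrees on the first three; a quick check (my snippet, not from the thread):

  import math

  assert 0 ** 0 == 1             # an empty product
  assert math.factorial(0) == 1  # 0! is also an empty product
  try:
      0 / 0
  except ZeroDivisionError:
      pass                       # undefined, not infinity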
Or, even more abstract, "every element of y". Which I think could sort of work
The other ones, sure, but I'm not following this one.
More a question of place-value representation systems than what most people are thinking of, which is 1 - ε.
A slightly facetious answer might be that this is the wrong question to ask, and the right question is: when did 1 stop being a prime number? To which the answer is: some time between 1933 (when the 6th edition of Hardy's _A course in pure mathematics_ was published) and 1938 (when the 7th edition was published).
We only notice the case for 2 because our human languages happen to define divisible-by-2 as a word and concept. If our languages called divisible-by-3 "treven" or something like that, we'd think it weird that 3 was the only treven prime.
This is not one of those times.
But also mentioned elsewhere in the thread: if we declared 1 to be a prime, then many (I daresay "most") of our theorems would have to change "prime number" to "prime number greater than one".
1 = () = (0, 0, 0, 0, 0, ...)
2 = (1) = (1, 0, 0, 0, 0, ...)
3 = (0, 1)
4 = (2)
5 = (0, 0, 1)
6 = (1, 1)
7 = (0, 0, 0, 1)
8 = (3)
9 = (0, 2)
10 = (1, 0, 1)
The i-th position in every tuple is the power of the i-th prime in the factorization of that number. So 10 = (1, 0, 1) = 2^1 * 3^0 * 5^1. 84 would be (2, 1, 0, 1) = 2^2 * 3^1 * 5^0 * 7^1. If we have unique factorization, there is exactly one way to write every positive integer like this, and there are many insights we can gain from this factorization. If 1 is prime, then we can write 6 = 1^257 * 2^1 * 3^1, or any other power of 1 we like. We just gain nothing from it.

There are often many equivalent ways to define any mathematical object, and I'm sure there are plenty of ways to define a prime number other than "its only factors are itself and 1". These other definitions are likely to obviously exclude 1. One obvious one is the set of basis coordinates in this "unique factorization" space that I just laid out here. And we're never really excluding or making a special case for 1, because 1's factorization is simply the absence of any powers -- empty set, all 0s, whatever you want to call it.
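Here is that exponent-tuple view made concrete, in the spirit of the thread's earlier Python snippet (a minimal sketch of mine; factor_tuple and its naive trial division are my own illustration):

  def is_prime(n):
      return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

  def factor_tuple(n):
      # Exponents of successive primes 2, 3, 5, ... in n's factorization.
      # The loop stops as soon as n is fully divided out, so no trailing
      # zeros are appended and factor_tuple(1) == () -- the empty tuple.
      exponents = []
      p = 2
      while n > 1:
          if is_prime(p):
              e = 0
              while n % p == 0:
                  n //= p
                  e += 1
              exponents.append(e)
          p += 1
      return tuple(exponents)

  assert factor_tuple(1) == ()
  assert factor_tuple(10) == (1, 0, 1)     # 2^1 * 3^0 * 5^1
  assert factor_tuple(84) == (2, 1, 0, 1)  # 2^2 * 3^1 * 5^0 * 7^1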
Keep in mind that "unique factorization" turns out to be very interesting in all sorts of other mathematical objects: rings, polynomials, symmetries, vector spaces, etc. They often have their own notion of "prime" or "primitive" objects and the correspondence with integer-primes is much cleaner if we don't consider 1 prime.
We could declare 4 to be a prime number, and keep the rest of the definition the same. Instead of just saying "no", you could ask, "okay, what would that do for us?" If there isn't a good answer, then what's the point? And usually, you're not in the 1% of 1% of 1%.
1, 2 and 3 are kind of special to me. In prime distribution studies, I discovered that they are special. It gets easier for some things if you consider only primes greater than or equal to 5. Explaining distribution gets easier, and some proofs become more obvious if you do that (tiny example: draw a Ulam-like spiral around the numbers of an analog clock; 2 and 3 will become outliers and a distribution will reveal itself along the 1, 5, 7 and 11 diagonals).
Anyways, "only divisible by itself and 1" is a darn elegant definition.
I am fascinated by geometric proofs though. The clock thing is just a riff on Ulam's work. I believe there is more to it if one sees it as a geometric object and not just a visualization drawing. I could be wrong though.
I noticed the same as you, and IIRC the (some?) ancient Greeks actually had an idea about 1 as not a number, but the unit that numbers were made of. So in a different class.
2 and 3 are also different, or rather all other primes from 5 and up are neighbours to a multiple of 6 (though not all such neighbours are primes, of course).
In base-6 all those primes end in 5 or 1. What is the significance? I don't know. I remember that I started thinking that 2*3=6; maybe the sequence of primes is a result of the intertwining of number systems in multiple dimensions or whatever? Then I started thinking about the late republic instead. ;)
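The multiple-of-6 observation is easy to check empirically (my snippet, not the commenter's):

  def is_prime(n):
      return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

  # Every prime >= 5 is a neighbour of a multiple of 6, i.e. p % 6 is
  # 1 or 5 -- which is also why they all end in 1 or 5 in base 6.
  for p in range(5, 10_000):
      if is_prime(p):
          assert p % 6 in (1, 5)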
In two dimensions it's easier.
I cannot rearrange one pebble.
I can rearrange two or three pebbles equidistant from each other in just one distinct way (inverting the position of a neighbouring pebble).
And so on...
There are many ways to think of natural numbers without actual numbers.
But thanks anyway! I learned a thing.
>>> def product(ints):
...     result = 1
...     for n in ints:
...         result *= n
...     return result

In which case there is no need to make 1 a prime, as you already have:

>>> product([])
1
1 x 1 x 1 = 1
...
Not prime!
This is debunked by https://ncatlab.org/nlab/show/too+simple+to+be+simple#relati...
Axioms are arbitrary. Use the axioms that are the most useful.
The question of whether or not the integer 1 is a prime doesn't make sense. The question is whether it is useful to define it as such, and the answer is a resounding no.
JJMcJ•4h ago
reaperman•4h ago
gerdesj•4h ago
"...ignoring the trivial case of 1 being an obvious factor of every integer."
I remember quite a big chunk of GEB formally defining how integers are really not trivial! The main problem seems to be that you soon end up with circular reasoning if you are not razor sharp with your definitions. That's just in an explainer book 8)
Then you have to define what factor means ...
Maxatar•4h ago
stouset•4h ago
gerdesj•4h ago
That's the GEB I mentioned above.
pinkmuffinere•3h ago
gerdesj•3h ago
Move GEB up the reading list right now! The edition I initially read was hard bound and was quite worn. I bought and read it again about 20 years ago and found more treasures.
It is a proper nerd grade treatise for non experts who are interested in maths, music and art. Really: maths, music and art from a mostly mathematical perspective. Hofstadter's writing style is very easy going and he is a master of clarity without complexity.
I don't think you need any more Maths than you would get up to age 18 or so at school to understand the entire book, and probably less. Even if you gloss over the formal Maths, the book still works.
Maxatar•3h ago
https://risingentropy.com/a-result-on-the-incompleteness-of-...
overboard2•3h ago
Edit: do you mean literally impossible?
Maxatar•3h ago
As an analogy you could imagine trying to define the set of all animals with a bunch of rules... "1. Animals have DNA, 2. Animals ingest organic matter. 3. Animals have a nervous system. 4. ... etc..."
And this is true of all animals, but it will also be true of things that aren't animals as well, like slime molds which are not quite animals but very similar to them.
Okay, so you keep adding more rules to narrow down your definition and stamp out slime molds, but then you find some other thing that satisfies that definition...
Now for animals maybe you can eventually have some very complex rule set that defines animals exactly and rules out all non-animals, but the principle is that this is not possible for natural numbers.
We can have rules like "0" is a natural number. For every natural number N there is a successor to it N + 1. If N + 1 = M + 1 then N = M. There is no natural number Q such that Q + 1 = 0.
Okay, this is a good starting point... but just like with animals, there are numbers that satisfy all of these rules but aren't natural numbers. You can keep adding more and more rules to try to stamp these numbers out, but no matter how hard you try, even if you add infinitely many rules, there will always be infinitely many numbers that satisfy your rules but aren't natural numbers.
In particular what you really want to say is that a natural number is finite, but no matter how hard you try there is no formal way to actually capture the concept of what it means to be finite in general so you end up with these mutant numbers that satisfy all of your rules but have infinitely many digits, and these are called non-standard natural numbers.
The reason non-standard natural numbers are a problem is because you might have a statement like "Every even integer greater than 2 can be written as the sum of two primes." and this statement might be true of the actual natural numbers but there might exist some freak mutant non-standard natural number for which it's not true. Unless your rules are able to stamp out these mutant non-standard natural numbers, then it is not possible to prove this statement, the statement becomes undecidable with respect to your rules. The only statements you can prove with respect to your rules are statements that are true of the real natural numbers as well as true of all the mutant natural numbers that your rules have not been able to stamp out.
So it's in this sense that I mean that it's not possible to specifically define the natural numbers. Any definition you come up with will also apply to mutant numbers, and these mutant numbers can get in the way of you proving things that are in principle true about the actual natural numbers.
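For contrast, this is roughly how proof assistants sidestep the problem: an inductive definition comes bundled with an induction principle that quantifies over all properties, the second-order step that no list of first-order rules can supply. A minimal Lean 4 sketch (my illustration, not the commenter's):

  -- Mirrors the rules above: zero is a MyNat, every MyNat has a
  -- successor, succ is injective, and succ n is never zero (the last
  -- two come for free from how inductive constructors behave).
  inductive MyNat : Type
    | zero : MyNat
    | succ : MyNat → MyNat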
gerdesj•2h ago
I've always had this feeling that the foundations (integers etc.) are a bit dodgy in formal Maths, but just as with, say, Civil Engineering, your world hasn't fallen apart for at least some days, and it works. Famously, in quantum Physics: "Shut up and calculate".
Thankfully, in the real world I just have to make web pages, file shares and glittery unicorns available to the computers belonging to paying customers. Securely ...
The foundational aspect equivalent of integers in IT might be DNS. Fuck around with either and you come unstuck rather quickly without realising exactly why until you get suitably rigorous ...
I'm also a networking bod (with some jolly expensive test gear) but that might be compared to pencils and paper for Maths 8)
pja•3h ago
Are such objects not inevitably isomorphic to the natural numbers?
Can you give an example of a formal definition that leads to something that obviously isn't the same as the naturals?
btilly•3h ago
In that article you'll see references to "first order logic" and "second order logic". First order logic captures any possible finite chain of reasoning. Second order logic allows us to take logical steps that would require a potentially infinite amount of reasoning to do. Gödel's famous theorems were about the limitations of first order logic. While second order logic has no such limitations, it is also not something that humans can actually do. (We can reason about second order logic though.)
Anyways a nonstandard model of arithmetic can have all sorts of bizarre things. Such as a proof that Peano Axioms lead to a contradiction. While it might seem that this leads to a contradiction in the Peano Axioms, it doesn't because the "proof" is (from our point of view) infinitely long, and so not really a proof at all! (This is also why logicians have to draw a very careful distinction between "these axioms prove" and "these axioms prove that they prove"...)
john-h-k•3h ago
My understanding is you can specifically and formally define the natural numbers with addition and multiplication, although multiplication means the language is no longer decidable.
You can define natural numbers with just addition ( Presburger arithmetic ) and it’s decidable.
I'm not sure how undecidable <=> "will define things that are similar to natural numbers but are not", but maybe I am missing something
Maxatar•2h ago
If a sentence S is undecidable from your axioms for the natural numbers then there are two models A and B satisfying those axioms where A satisfies S and B satisfies not S. So which one is the standard natural numbers, is it A or B?
Either A or B will be an example of something that satisfies your definition of natural numbers and yet is not the natural numbers.
sam_ezeh•2h ago
This isn't correct. This is only true for first-order theories of the natural numbers using the axiom schema of induction. Second-order Peano arithmetic with the full axiom of induction has the natural numbers as its only model. This property is called "categoricity" and you can find the proof here [1] if you're interested
[1]: https://builds.openlogicproject.org/content/second-order-log...
Maxatar•2h ago
You can adopt Henkin semantics to give the naturals an interpretation, which is still second order logic, but then you're back to lacking a categorical model of the naturals.
sam_ezeh•1h ago
Can you explain what you mean here? Full semantics for second-order logic has a unique interpretation i.e. the standard natural numbers
Maxatar•49m ago
Thus, although full second order Peano axioms are categorical, second order logic by itself never delivers a self‑contained model of the natural numbers. Any actual interpretation of the natural numbers in second order logic requires an infinite regress of background theories.
JadeNB•4h ago
This is true and compelling as things developed, but I think it's an explanation of where history brought us, rather than a logical inevitability. For example, I can easily imagine, in a different universe, teachers patiently explaining that we declare that the empty set is not a set, to avoid complicating theorems, proofs, and exposition by the endless repetition of "non-empty set."
(I agree that this is different, because there's no interesting "unique factorization theorem" for sets, but I can still imagine things developing this way. And, indeed, there are complications caused by allowing the empty set in a model of a structure, and someone determined to do so can make themselves pointlessly unpopular by asking "but have you considered the empty manifold?" and similar questions. See also https://mathoverflow.net/questions/45951/interesting-example....)
tux3•4h ago
That universe would be deprived of the bottomless wellspring of dryness that is the set-theoretic foundations of mathematics. Unthinkable!
JadeNB•2h ago
"Wellspring of dryness" is quite a metaphor, and I take it from that metaphor that this outcome wouldn't much bother you. I'll put in a personal defense for set theory, but only an appeal to my personal taste, since I have no expert, and barely even an amateurish, knowledge of set theory beyond the elementary; but I'll also acknowledge that set-theoretic foundations are not to everyone's taste, and that someone who has an alternate foundational system that appeals to them is doing no harm to themselves or to me.
> That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
In this alternate universe, the ZF or ZFC axioms (where C becomes, of course, "the product of sets is a set") would certainly involve, not the axiom of the empty set, but rather some sort of "axioms of sets", declaring that there exists a set. Because it's not empty, this set has at least one element, which we may extract and use to make a one-element set. Now observe that all one-element sets are set-theoretically the same, and so may indifferently be denoted by *; and then charge ahead with the construction, using not Ø, Ø ∪ {Ø}, Ø ∪ {Ø} ∪ {Ø ∪ {Ø}}, etc. but *, * ∪ {*}, * ∪ {*} ∪ {* ∪ {*}}, etc. Then all that would be left would be to decide whether our natural numbers started at the cardinality 1 of *, or if we wanted natural numbers to count quantities 1 less than the cardinality of a set.
tux3•1h ago
Appreciate the defense of set theory, I can't find a problem with it!
gus_massa•4h ago
"The intersection of two sets is a set."
JadeNB•3h ago
> "The intersection of two sets is a set."
Many results in set theory, yes! (Or at least in elementary set theory. I'm not a set theorist by profession, so I can't speak to how often it arises in research-level set theory.) But, once one leaves set theory, the empty set can cause problems. For the first example that springs to mind, it is a cute result that, if a set S has a binary operation * such that, for every pair of elements a, b in S, there is a unique solution x to a*x = b, and a unique solution y to y*a = b, then * makes S a group ... unless S is empty!
In fact, on second thought, even in set theory, there are things like: the definition of a partial order being a well ordering would become simpler to state if the empty set were disallowed; and the axiom of choice would become just the statement that the product of sets is a set! I'm sure that I could come up with more examples where allowing empty sets complicates things, just as you could come up with more examples where it simplifies them. That there is no unambiguous answer one direction or the other is why I believe this alternate universe could exist, but we're not in it!
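The "unless S is empty" caveat is easy to see concretely; a small brute-force sketch (mine, not the commenter's):

  from itertools import product

  def uniquely_solvable(S, op):
      # For all a, b in S: unique x with a*x = b and unique y with y*a = b,
      # where op is a dict mapping (a, b) to a*b.
      return all(
          sum(op[(a, x)] == b for x in S) == 1 and
          sum(op[(y, a)] == b for y in S) == 1
          for a, b in product(S, S)
      )

  # Z/2Z under addition satisfies the condition, and it is a group:
  S = {0, 1}
  op = {(a, b): (a + b) % 2 for a, b in product(S, S)}
  assert uniquely_solvable(S, op)

  # ...but so does the empty magma, vacuously (all() of nothing is True),
  # and it is not a group: a group must contain an identity element.
  assert uniquely_solvable(set(), {})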
chongli•1h ago
The same is true for any structure which posits the existence of some element. Of course it cannot be the empty set.
JadeNB•3m ago
It's not necessarily a problem that the empty set cannot be a group. (Although the only reason that it cannot is a definition, and, similarly, the definition of a field requires two distinct elements, which hasn't stopped some people from positing that it is a problem that there is then no field with one element.)
The problem is that there's a natural property of magmas (sets with a binary operation), namely the unique solvability condition I mentioned, that characterizes "group or the empty set," which is more awkward than just characterizing groups. See https://golem.ph.utexas.edu/category/2020/08/the_group_with_..., which may be where I first learned this result.
murderfs•3h ago
mathgeek•2h ago
Since both the inclusion and exclusion of zero are accepted definitions depending on who’s asking, books usually just pick one or define two sets (commonly denoted as N_0 and N_1). Different topics benefit from using one set over the other, as well as having to deal with division by zero, etc. Number theory tends to exclude zero.
JadeNB•2h ago
Oh my, it had never occurred to me that one could disagree, not just about whether the natural numbers include 0 or don't, but also about how to denote "natural numbers with 0" and "natural numbers without." Personally, I'm a fan of Z_{\ge 0} and Z_{> 0}, which are a little ugly but which any mathematician, regardless of their preferred conventions, can read and understand without further explanation.
dullcrisp•8m ago
JadeNB•4m ago
It probably doesn't, but, if you want to allow negative numbers, then addition is partial unless you have 0. It's perfectly reasonable to disallow negative numbers—historically, negative numbers had to be explicitly allowed, not explicitly disallowed—but it does mean that subtraction becomes a partial operation or, phrased equivalently but perhaps more compellingly, that we have to give up on solving simple equations for x like x + 2 = 1.
tikhonj•4h ago
jordigh•3h ago
So many theorems have to say, "for every odd prime..."
https://math.stackexchange.com/questions/1177104/what-is-an-...
kordlessagain•3h ago
aleph_minus_one•2h ago
This does hold in the ring Z. In the ring Z[i], 2 = (1+i)*(1-i), and the two factors are prime elements.
brennopost•3h ago
chrismcb•2h ago
arcastroe•1h ago
bluepnume•54m ago
"Even" just means "divisible by 2"
"2 is the only prime that is divisible by 2" "3 is the only prime that is divisible by 3" "5 is the only prime that is divisible by 5"
...
"N is the only prime that is divisible by N"