* empty products being 1 of course
Of course, I generally hear the fundamental theorem of arithmetic phrased as “every integer greater than one…” which is making its own little special case for the number 1.
On the contrary: it is extremely inconvenient not to allow the product of an empty sequence of numbers to equal 1. The sum of an empty sequence is 0. The Baz of an empty sequence of numbers, for any monoid Baz, is the identity element of that monoid. Any other convention is going to be very painful and full of its own exceptions.
There are no exceptions to any rules here. 1 is not prime. Every positive integer can be expressed as the unique product of powers of primes. 1's expression is [], or 0000..., or ∅.
I meant that it’s inconvenient to require engaging with that concept directly in the everyday definition of prime numbers.
And note that this convention is not at all required for the point I'm making regarding prime numbers. As you say yourself, restrict the theorem to integers greater than 1, and you can forget about empty products (and it is still easier to state if 1 is not prime (which it isn't)).
That is enough justification for me that 1 is not prime. It has a factorisation!
And then C/C++ compilers are subtly inconsistent. If 0 is a valid index, then null should correspond to uintptr_t(-1), not the 0 address. That leads to non-trivial complications in OS implementations to make sure that address 0 is not mapped, since from the hardware's point of view 0 is a perfectly normal address.
In the same way we index from 0 because indexing gets way more awkward if we index from 1.
In-band sentinels are both quite rare and equally convenient with -1 or 0. In fact I would say -1 is a bit more elegant, because sometimes you need multiple sentinel values and then you can easily use -2 (what are you going to do, use 0 and 1 and then index from 2?).
The more common operations are things like indexing into flattened multidimensional arrays, or dealing with intervals, which are both way more elegant with 0-based indexing.
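For example, the flattening arithmetic (a minimal Python sketch; the function names are just illustrative):

    # 0-based: the flattened offset of (row, col) in a grid with
    # `cols` columns is simply row * cols + col.
    def offset_0based(row, col, cols):
        return row * cols + col

    # 1-based: correction terms appear at both ends.
    def offset_1based(row, col, cols):
        return (row - 1) * cols + (col - 1) + 1

    assert offset_0based(2, 3, cols=4) == 11  # third row, fourth column
    assert offset_1based(3, 4, cols=4) == 12  # the same cell, 1-based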
0 is a valid index into an array. It's even a valid index into global memory in some environments. Not mapping memory to address 0 is completely trivial. I'm not sure what non-trivial complications you're thinking of.
Depending on your definition of divisor, it excludes everything except 1 and -1, whose two integer divisors are 1 and -1. But then, if you specify that "divisor" means "positive integer divisor", it no longer automatically excludes the negative numbers, since the two positive integer divisors of -2 are 1 and 2. (Incidentally, plenty of algebraists, myself included, are perfectly comfortable with including -2 as a prime.)
At that time we can determine if 1 is prime.
If it’s found that Eratosthenes’ sieve is the only prime generating function then we have our answer.
Namely, if the sieve is the only generating function for all of the primes, then 1 would need to be omitted as a prime, since removing its multiples would remove every number, thus failing to generate the list of primes.
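That failure mode is easy to demonstrate (a minimal Python sketch; the smallest_prime parameter is my own, just to make the experiment runnable):

    def sieve(limit, smallest_prime=2):
        # Sieve of Eratosthenes: keep a number, cross out its proper multiples.
        candidates = set(range(smallest_prime, limit))
        primes = []
        for n in range(smallest_prime, limit):
            if n in candidates:
                primes.append(n)
                candidates -= set(range(2 * n, limit, n))
        return primes

    print(sieve(30))                    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    print(sieve(30, smallest_prime=1))  # [1] -- crossing out 1's multiples removed everything else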
Nowadays 2 is considered prime. Seems silly to question why someone is claiming 2 is prime if that is how it is defined in the modern day.
> What makes you think two is prime
The current mathematical definition of a prime number
I will admit that for me it was being brainwashed through years of high school and university mathematics.
0^0 = 1? Yes, it’s simpler that way.
0! = 1? Yes, it’s simpler that way.
0/0 = ∞? No, it’s undefined.
0.9999… = 1? Yes, it’s just two ways of expressing the same number.
1+2+3+… = -1/12? No, but if it did have a finite value, that’s what it would be.
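(For what it's worth, Python agrees with the first two conventions:)

    >>> 0 ** 0
    1
    >>> import math
    >>> math.factorial(0)
    1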
Or even more abstract "every element on y". Which I think could sort of work
The other ones, sure, but I'm not following this one.
If I have inf * k = inf, and divide both sides by inf... (the misuse), then 1 = any k, including 1/12. Now this is useless in calculus and number theory, but in quantum field theory it is a useful tool.
So inf = 1/12 and a non-convergent series = a constant, but you have misused dividing infinity by itself to get it.
Infinity for division? It's useful, like counting chickens starting at zero. L'Hôpital's rule is a very useful tool, but do not misuse it.
More a question of place-value representation systems than what most people are thinking of, which is 1 - ε.
A slightly facetious answer might be that this is the wrong question to ask, and the right question is: when did 1 stop being a prime number? To which the answer is: some time between 1933 (when the 6th edition of Hardy's _A course in pure mathematics_ was published) and 1938 (when the 7th edition was published).
We only notice the case for 2 because our human languages happen to define divisible-by-2 as a word and concept. If our languages called divisible-by-3 "treven" or something like that, we'd think it weird that 3 was the only treven prime.
This is not one of those times.
But also mentioned elsewhere in the thread: if we declared 1 to be a prime, then many (I daresay "most") of our theorems would have to change "prime number" to "prime number greater than one".
1 = () = (0, 0, 0, 0, 0, ...)
2 = (1) = (1, 0, 0, 0, 0, ...)
3 = (0, 1)
4 = (2)
5 = (0, 0, 1)
6 = (1, 1)
7 = (0, 0, 0, 1)
8 = (3)
9 = (0, 2)
10 = (1, 0, 1)
The i-th position in every tuple is the power of the i-th prime in the factorization of that number. So 10 = (1, 0, 1) = 2^1 * 3^0 * 5^1. 84 would be (2, 1, 0, 1) = 2^2 * 3^1 * 5^0 * 7^1. If we have unique factorization, there is exactly one way to write every positive integer like this, and there are many insights we can gain from this factorization. If 1 is prime, then we can write 6 = 1^257 * 2^1 * 3^1, or any other power of 1 we like. We just gain nothing from it.

There are often many equivalent ways to define any mathematical object, and I'm sure there are plenty of ways to define a prime number other than "its only factors are itself and 1". These other definitions are likely to obviously exclude 1. One obvious one is the set of basis coordinates in this "unique factorization" space that I just laid out here. And we're never really excluding or making a special case for 1, because 1's factorization is simply the absence of any powers -- empty set, all 0s, whatever you want to call it.
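A small Python sketch of this encoding (naive trial division; the function name is mine):

    def prime_exponent_tuple(n):
        # Exponent tuple of n's prime factorization; 1 maps to the empty tuple.
        exponents = []
        p = 2
        while n > 1:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exponents.append(e)
            p += 1
            while any(p % q == 0 for q in range(2, p)):  # naive next-prime search
                p += 1
        return tuple(exponents)

    print(prime_exponent_tuple(1))   # ()
    print(prime_exponent_tuple(10))  # (1, 0, 1)
    print(prime_exponent_tuple(84))  # (2, 1, 0, 1)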
Keep in mind that "unique factorization" turns out to be very interesting in all sorts of other mathematical objects: rings, polynomials, symmetries, vector spaces, etc. They often have their own notion of "prime" or "primitive" objects and the correspondence with integer-primes is much cleaner if we don't consider 1 prime.
We could declare 4 to be a prime number, and keep the rest of the definition the same. Instead of just saying "no", you could ask, "okay, what would that do for us?" If there isn't a good answer, then what's the point? And usually, you're not in the 1% of 1% of 1%.
1, 2 and 3 are kind of special to me. In prime distribution studies, I discovered that they are special. It gets easier for some things if you consider only primes greater than or equal to 5. Explaining distribution gets easier, and some proofs become more obvious if you do that (tiny example: draw a Ulam-like spiral around the numbers of an analog clock; 2 and 3 will become outliers and a distribution will reveal itself along the 1, 5, 7 and 11 diagonals).
Anyways, "only divisible by itself and 1" is a darn elegant definition.
I am fascinated by geometric proofs though. The clock thing is just a riff on Ulam's work. I believe there is more to it if one sees it as a geometric object and not just a visualization drawing. I could be wrong though.
I noticed the same as you, and IIRC the (some?) ancient Greeks actually had an idea of 1 as not a number, but the unit that numbers were made of. So in a different class.
2 and 3 are also different, or rather all other primes from 5 and up are neighbours to a multiple of 6 (though not all such neighbours are primes, of course).

In base-6 all those primes end in 5 or 1. What is the significance? I don't know. I remember that I started thinking that 2*3=6; maybe the sequence of primes is a result of the intertwining of number systems in multiple dimensions or whatever? Then I started thinking about the late republic instead. ;)
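That neighbour-of-a-multiple-of-6 observation is easy to spot-check (a quick Python sketch; the helper is mine):

    # Every prime >= 5 satisfies p % 6 == 1 or p % 6 == 5 -- which is also
    # why those primes end in 1 or 5 when written in base 6.
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    assert all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p))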
In two dimensions it's easier.
I cannot rearrange one pebble.
I can rearrange two or three pebbles equidistant from each other in just one distinct way (inverting the position of a neighbouring pebble).
And so on...
There are many ways to think of natural numbers without actual numbers.
But thanks anyway! I learned a thing.
>>> def product(ints):
...     result = 1
...     for n in ints:
...         result *= n
...     return result
In which case there is no need to make 1 a prime, as you already have:

>>> product([])
1
1 x 1 x 1 = 1
...
Not prime!
This is debunked by https://ncatlab.org/nlab/show/too+simple+to+be+simple#relati...
Axioms are arbitrary. Use the axioms that are the most useful.
The question of whether or not the integer 1 is a prime doesn't make sense. The question is whether it is useful to define it as such, and the answer is a resounding no.
If you really want to go down the road of solipsism, read Karl Popper.
People "axiom" their way out of 1+1=2 in this way: by changing the axioms, they change the topic, so they change the conclusion. I observe this pattern in disagreements very often.
Should we use a space-time manifold, or separate space and time dimensions? Do future objects exist, and do past objects exist? Do statements about the future have a definite truth value? Does Searle's Chinese Room think? Which Ship of Theseus is the original: the slowly replaced ship, or the ship rebuilt from the original parts?
I find that so many philosophy debates actually argue over definitions rather than practical matters, because definitions do matter. Well, add your own fun definition questions!
Now I'm no crackpot numerologist, adding up the numerical values of Bill Gates' name, or telling you who shot JFK. But I can tell you that the main launch pad 39A at Cape Kennedy was not numbered by accident -- look it up in the Book of Psalms. And it's interesting how the city buses around here are numbered. For example, the 68xx series; I look up Psalm 68 and I can definitely imagine the bus singing that as it lumbers down the road -- can't you?
Back to primes -- consider the top number authorities of our times, such as the US Post Office, city planners, and the telephone company (circa 1970s). I ran a chunk of ZIP codes from Southern California and discovered that some are the products of two quite large prime numbers. Others yield interesting factors. Once again I pull out my Book of Psalms.
There are plenty of other "hermeneutics" to interpret assigned numbers, especially street addresses. And as for phone numbers, I've gone back to figuring out "what do they spell" on a standard TouchTone keypad, because sometimes it's quite informative.
It's no accident, for example, that the hospital where I was born is located at 4077 5th Avenue. And that number assigned by city planners, many decades before M*A*S*H was written or went on TV. Significant nonetheless.
I also figured out a few prime numbers related to my own life, and others that are recurring tropes, just cropping up at interesting times. What's your social security number? Have you sort of broken it down and pondered if those numbers turned up again and again in your life? Every time I see a number now, I'm compulsively factoring it out in my head. Is it prime? It feels prime. I'll check it in the app later; try some mental math for now.
I'm also counting things more often now. How many spokes in a wheel? How many petals in a flower, especially a flower depicted in art. How many brick courses in that interesting wall they built? Plug any interesting numbers back into the divisors app. Finding the primes, find the factors, just ponder numeric coincidences. It's fun. So many signs and signals, hidden in plain sight before us. Buses singing Psalm 68 as they take on passengers. Launch pads singing Psalm 39 as Europa Clipper slips the surly bonds of Earth. What's on your telephone dial?
gerdesj•2mo ago
"...ignoring the trivial case of 1 being an obvious factor of every integer."
I remember quite a big chunk of GEB formally defining how integers are really not trivial! The main problem seems to be that you soon end up with circular reasoning if you are not razor sharp with your definitions. That's just in an explainer book 8)
Then you have to define what factor means ...
gerdesj•2mo ago
That's the GEB I mentioned above.
gerdesj•2mo ago
Move GEB up the reading list right now! The edition I initially read was hard bound and was quite worn. I bought and read it again about 20 years ago and found more treasures.
It is a proper nerd grade treatise for non experts who are interested in maths, music and art. Really: maths, music and art from a mostly mathematical perspective. Hofstadter's writing style is very easy going and he is a master of clarity without complexity.
I don't think you need any more Maths than you would get up to age 18 or so at school to understand the entire book and probably less. Even if you gloss the formal Maths the book still works.
Maxatar•2mo ago
https://risingentropy.com/a-result-on-the-incompleteness-of-...
ForOldHack•2mo ago
Propositional Calculus will teach you to think in symbols you cannot even fathom. This alone is worth every minute reading the book.
Every few years I reread it, and get a new sense of solving problems. The book can be divided into parts... But the whole...
overboard2•2mo ago
Edit: do you mean literally impossible?
Maxatar•2mo ago
As an analogy you could imagine trying to define the set of all animals with a bunch of rules... "1. Animals have DNA, 2. Animals ingest organic matter. 3. Animals have a nervous system. 4. ... etc..."
And this is true of all animals, but it will also be true of things that aren't animals as well, like slime molds which are not quite animals but very similar to them.
Okay, so you keep adding more rules to narrow down your definition and stamp out slime molds, but then you find some other thing that satisfies that definition...

Now for animals maybe you can eventually have some very complex rule set that defines animals exactly and rules out all non-animals, but the point is that this is not possible for natural numbers.
We can have rules like "0" is a natural number. For every natural number N there is a successor to it N + 1. If N + 1 = M + 1 then N = M. There is no natural number Q such that Q + 1 = 0.
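In symbols, a standard first-order rendering of those rules (writing S(n) for the successor n + 1):

    0 \in \mathbb{N}
    \forall n\, (n \in \mathbb{N} \rightarrow S(n) \in \mathbb{N})
    \forall n\, \forall m\, (S(n) = S(m) \rightarrow n = m)
    \forall n\, (S(n) \neq 0)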
Okay, this is a good starting point... but just like with animals there are numbers that satisfy all of these rules but aren't natural numbers. You can keep adding more and more rules to try to stamp these numbers out, but no matter how hard you try, even if you add infinitely many rules, there will always be infinitely many numbers that satisfy your rules but aren't natural numbers.
In particular what you really want to say is that a natural number is finite, but no matter how hard you try there is no formal way to actually capture the concept of what it means to be finite in general so you end up with these mutant numbers that satisfy all of your rules but have infinitely many digits, and these are called non-standard natural numbers.
The reason non-standard natural numbers are a problem is because you might have a statement like "Every even integer greater than 2 can be written as the sum of two primes." and this statement might be true of the actual natural numbers but there might exist some freak mutant non-standard natural number for which it's not true. Unless your rules are able to stamp out these mutant non-standard natural numbers, then it is not possible to prove this statement, the statement becomes undecidable with respect to your rules. The only statements you can prove with respect to your rules are statements that are true of the real natural numbers as well as true of all the mutant natural numbers that your rules have not been able to stamp out.
So it's in this sense that I mean that it's not possible to specifically define the natural numbers. Any definition you come up with will also apply to mutant numbers, and these mutant numbers can get in the way of you proving things that are in principle true about the actual natural numbers.
gerdesj•2mo ago
I've always had this feeling that the foundations (integers etc) are a bit dodgy in formal Maths but just as with say Civil Engineering, your world hasn't fallen apart for at least some days and it works. Famously, in Physics involving quantum: "Shut up and calculate".
Thankfully, in the real world I just have to make web pages, file shares and glittery unicorns available to the computers belonging to paying customers. Securely ...
The foundational aspect equivalent of integers in IT might be DNS. Fuck around with either and you come unstuck rather quickly without realising exactly why until you get suitably rigorous ...
I'm also a networking bod (with some jolly expensive test gear) but that might be compared to pencils and paper for Maths 8)
pja•2mo ago
Are such objects not inevitably isomorphic to the natural numbers?
Can you give an example of a formal definition that leads to something that obviously isn't the same as the naturals?
btilly•2mo ago
In that article you'll see references to "first order logic" and "second order logic". First order logic captures any possible finite chain of reasoning. Second order logic allows us to take logical steps that would require a potentially infinite amount of reasoning to do. Gödel's famous theorems were about the limitations of first order logic. While second order logic has no such limitations, it is also not something that humans can actually do. (We can reason about second order logic though.)
Anyways a nonstandard model of arithmetic can have all sorts of bizarre things. Such as a proof that Peano Axioms lead to a contradiction. While it might seem that this leads to a contradiction in the Peano Axioms, it doesn't because the "proof" is (from our point of view) infinitely long, and so not really a proof at all! (This is also why logicians have to draw a very careful distinction between "these axioms prove" and "these axioms prove that they prove"...)
pja•2mo ago
If you (for example) extend Peano numbers with extra axioms that state things like “hey, here are some hyperreals” or “this Goedel sentence is explicitly defined to be true (or false)” it’s unsurprising that you can end up in some weird places.
btilly•2mo ago
Furthermore, it is possible to construct nonstandard models such that every statement that is true in our model, remains true in that one, and ditto for every statement that is false. They really look identical to our model, except that we know from construction that they aren't. This fact is what makes the transfer principle work in nonstandard analysis, and the ultrapower construction shows how to do it.
(My snark about NSA is that we shouldn't need the axiom of choice to find the derivative of x^2. But I do find it an interesting approach to know about.)
john-h-k•2mo ago
My understanding is you can specifically and formally define the natural numbers with addition and multiplication, although multiplication means the language is no longer decidable.
You can define natural numbers with just addition ( Presburger arithmetic ) and it’s decidable.
I'm not sure how undecidable <=> “will define things that are similar to natural numbers but are not”, but maybe I am missing something.
Maxatar•2mo ago
If a sentence S is undecidable from your axioms for the natural numbers then there are two models A and B satisfying those axioms where A satisfies S and B satisfies not S. So which one is the standard natural numbers, is it A or B?
Either A or B will be an example of something that satisfies your definition of natural numbers and yet is not the natural numbers.
sam_ezeh•2mo ago
This isn't correct. This is only true for first-order theories of the natural numbers using the axiom schema of induction. Second-order Peano arithmetic with the full axiom of induction has the natural numbers as its only model. This property is called "categoricity" and you can find the proof here [1] if you're interested
[1]: https://builds.openlogicproject.org/content/second-order-log...
Maxatar•2mo ago
You can adopt Henkin semantics to give the naturals an interpretation, which is still second order logic, but then you're back to lacking a categorical model of the naturals.
sam_ezeh•2mo ago
Can you explain what you mean here? Full semantics for second-order logic has a unique interpretation i.e. the standard natural numbers
Maxatar•2mo ago
Thus, although full second order Peano axioms are categorical, second order logic by itself never delivers a self‑contained model of the natural numbers. Any actual interpretation of the natural numbers in second order logic requires an infinite regress of background theories.
drdeca•2mo ago
Specifically, the case of the divisor being 1.
JadeNB•2mo ago
This is true and compelling as things developed, but I think it's an explanation of where history brought us, rather than a logical inevitability. For example, I can easily imagine, in a different universe, teachers patiently explaining that we declare that the empty set is not a set, to avoid complicating theorems, proofs, and exposition by the endless repetition of "non-empty set."
(I agree that this is different, because there's no interesting "unique factorization theorem" for sets, but I can still imagine things developing this way. And, indeed, there are complications caused by allowing the empty set in a model of a structure, and someone determined to do so can make themselves pointlessly unpopular by asking "but have you considered the empty manifold?" and similar questions. See also https://mathoverflow.net/questions/45951/interesting-example....)
tux3•2mo ago
That universe would be deprived of the bottomless wellspring of dryness that is the set-theoretic foundations of mathematics. Unthinkable!
JadeNB•2mo ago
"Wellspring of dryness" is quite a metaphor, and I take it from that metaphor that this outcome wouldn't much bother you. I'll put in a personal defense for set theory, but only an appeal to my personal taste, since I have no expert, and barely even an amateurish, knowledge of set theory beyond the elementary; but I'll also acknowledge that set-theoretic foundations are not to everyone's taste, and that someone who has an alternate foundational system that appeals to them is doing no harm to themselves or to me.
> That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
In this alternate universe, the ZF or ZFC axioms (where C becomes, of course, "the product of sets is a set") would certainly involve, not the axiom of the empty set, but rather some sort of "axioms of sets", declaring that there exists a set. Because it's not empty, this set has at least one element, which we may extract and use to make a one-element set. Now observe that all one-element sets are set-theoretically the same, and so may indifferently be denoted by *; and then charge ahead with the construction, using not Ø, Ø ∪ {Ø}, Ø ∪ {Ø} ∪ {Ø ∪ {Ø}}, etc. but *, * ∪ {*}, * ∪ {*} ∪ {* ∪ {*}}, etc. Then all that would be left would be to decide whether our natural numbers started at the cardinality 1 of *, or if we wanted natural numbers to count quantities 1 less than the cardinality of a set.
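For concreteness, the usual empty-set trick from the quoted comment can be sketched in Python, using frozensets (which, unlike plain sets, are hashable and so can be nested); the names here are mine:

    # Von Neumann naturals: 0 = Ø and n + 1 = n ∪ {n}.
    zero = frozenset()

    def succ(n):
        return n | frozenset({n})

    one = succ(zero)    # {Ø}
    two = succ(one)     # {Ø, {Ø}}
    three = succ(two)   # {Ø, {Ø}, {Ø, {Ø}}}
    assert len(three) == 3  # each numeral has exactly that many elements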
tux3•2mo ago
Appreciate the defense of set theory, I can't find a problem with it!
gus_massa•2mo ago
"The intersection of two sets is a set."
JadeNB•2mo ago
> "The intersection of two sets is a set."
Many results in set theory, yes! (Or at least in elementary set theory. I'm not a set theorist by profession, so I can't speak to how often it arises in research-level set theory.) But, once one leaves set theory, the empty set can cause problems. For the first example that springs to mind, it is a cute result that, if a set S has a binary operation * such that, for every pair of elements a, b in S, there is a unique solution x to a*x = b, and a unique solution y to y*a = b, then * makes S a group ... unless S is empty!
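That characterization is easy to test by brute force on small finite magmas (a Python sketch; the helper is hypothetical, just for illustration):

    def uniquely_solvable(elements, op):
        # For all a, b: exactly one x with a*x == b, and exactly one y with y*a == b.
        elements = list(elements)
        return all(
            sum(op(a, x) == b for x in elements) == 1
            and sum(op(y, a) == b for y in elements) == 1
            for a in elements
            for b in elements
        )

    add_mod3 = lambda a, b: (a + b) % 3
    print(uniquely_solvable(range(3), add_mod3))  # True -- and Z/3 is indeed a group
    print(uniquely_solvable([], add_mod3))        # True, vacuously -- but not a group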
In fact, on second thought, even in set theory, there are things like: the definition of a partial order being a well ordering would become simpler to state if the empty set were disallowed; and the axiom of choice would become just the statement that the product of sets is a set! I'm sure that I could come up with more examples where allowing empty sets complicates things, just as you could come up with more examples where it simplifies them. That there is no unambiguous answer one direction or the other is why I believe this alternate universe could exist, but we're not in it!
chongli•2mo ago
The same is true for any structure which posits the existence of some element. Of course it cannot be the empty set.
JadeNB•2mo ago
It's not necessarily a problem that the empty set cannot be a group. (Although the only reason that it cannot is a definition, and, similarly, the definition of a field requires two distinct elements, which hasn't stopped some people from positing that it is a problem that there is then no field with one element.)
The problem is that there's a natural property of magmas (sets with binary operation), namely the unique solvability condition I mentioned, that characterizes "group or the empty set," which is more awkward than just characterizing groups. Or you may argue, fairly, that that's not a problem, but it is certainly an example where allowing the empty set to be a set complicates statements, which is all that I was meaning to illustrate. Hopefully obviously, without meaning seriously to suggest that the empty set shouldn't be a set.
(I remembered in the course of drafting this comment that https://golem.ph.utexas.edu/category/2020/08/the_group_with_... discusses, far more entertainingly and insightfully than I do, the characterization that I mention, and may have been where I learned it.)
chongli•2mo ago
In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
JadeNB•2mo ago
You certainly can do that, but it's not the only way. Even in this universe, I would expect to show that concrete sets A and B are disjoint by showing x ∈ A → x ∉ B, which makes perfect sense even without an empty set.
> In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
Rather, in this alternate universe, intersection is partially defined. Again, even in this universe, we're used to accepting some operations being partial!
chongli•2mo ago
Yes, but then topology becomes a very tedious exercise because so many proofs rely on the fact that the empty set is contained in every topology, that the empty set is both closed and open, and that intersections frequently yield the empty set. With partially defined intersection you're forced to specially handle every case where two sets might be disjoint.
JadeNB•2mo ago
Certainly this would be a good objection if I proposed to get rid of empty sets in our universe. (I don't!) But an alternate universe that developed this way would have either just accepted that topology was an inherently ugly subject, or worked out some equivalent workaround (for example, with testing topologies by {0, 1}-valued functions, of which we can take maxima and minima to simulate unions and intersections without worrying about the possibility of an intersection being empty), or else come up with some other approach entirely. (There is, after all, nothing sacred about a topology being specified by its open sets; see the discussion at https://mathoverflow.net/questions/19152/why-is-a-topology-m.... That's how history shook out for us, but it's hardly an inevitable concept except for those of us who have already learned to think about things that way.)
I am not claiming that this would be an improvement (my suspicion is that it would be an improvement in some ways and a regression in others), just that I think that it is not unimaginable that history could have developed this way. It would not then have seemed that the definitions and theorems were artificially avoiding the concept of an empty set, because the mathematical thought of the humans who make those definitions and theorems would simply not think of the empty set as a thing, and so would naturally have taken what seem to us like circuitous tours around it. Just as, surely, there are circuitous tours that we take in our universe, that could be made more direct if we only phrased our reasoning in terms of ... well, who knows? If I knew, then that's the math that I'd be doing, and indeed I see much of the research I do as attempting to discover the "right" direct path to the conclusion, whether or not it's the approach that fits in best with the prevailing thought.
chongli•2mo ago
I know that at one time we did mathematics without the number zero and that its introduction was a profound (and controversial) change. The empty set seems like a perfectly natural extension of zero as a concept. Perhaps the universe with no empty set also has no zero? Would be very interesting to see how mathematics would develop without either construct.
mathgeek•2mo ago
Since both the inclusion and exclusion of zero are accepted definitions depending on who’s asking, books usually just pick one or define two sets (commonly denoted as N_0 and N_1). Different topics benefit from using one set over the other, as well as having to deal with division by zero, etc. Number theory tends to exclude zero.
JadeNB•2mo ago
Oh my, it had never occurred to me that one could disagree, not just about whether the natural numbers include 0 or don't, but also about how to denote "natural numbers with 0" and "natural numbers without." Personally, I'm a fan of Z_{\ge 0} and Z_{> 0}, which are a little ugly but which any mathematician, regardless of their preferred conventions, can read and understand without further explanation.
ForOldHack•2mo ago
I am totally assuming you knew this already.
JadeNB•2mo ago
It probably doesn't, but, if you want to allow negative numbers, then addition is partial unless you have 0. It's perfectly reasonable to disallow negative numbers—historically, negative numbers had to be explicitly allowed, not explicitly disallowed—but it does mean that subtraction becomes a partial operation or, phrased equivalently but perhaps more compellingly, that we have to give up on solving simple equations for x like x + 2 = 1.
dullcrisp•2mo ago
I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean “non-empty set” and then say something like “set-theoretic set” to mean what we now mean when we say “set.” But that sounds like a mouthful.
JadeNB•2mo ago
Good point!
> I don’t have anything against them personally but they’re probably less natural than the empty set being a set.
An interesting idea, which history supports: 0 was considered as a number before negative numbers were, and we still usually consider only "natural sets" and not "negative sets" (except for Schanuel: https://doi.org/10.1007/BFb0084232).
> I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean non-empty set and then say something like “set-theoretic set” to mean what we now mean when we say “set.”
Right, or a different word entirely, just like we refer to 1 only as a number that's not prime, not as a "number-theoretic prime." But, anyway, the analogy was just the first one that sprang to mind; it doubtless has many infelicities that could be improved by a better analogy, if it's not just a worthless idea overall.
jordigh•2mo ago
So many theorems have to say, "for every odd prime..."
https://math.stackexchange.com/questions/1177104/what-is-an-...
aleph_minus_one•2mo ago
This does hold in the ring Z. In the ring Z[i], 2 = (1+i)*(1-i), and the two factors are prime elements.
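That factorization can be spot-checked numerically with Python's built-in complex numbers, with 1j playing the role of i:

    >>> (1 + 1j) * (1 - 1j)
    (2+0j)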
bluepnume•2mo ago
"Even" just means "divisible by 2"
"2 is the only prime that is divisible by 2" "3 is the only prime that is divisible by 3" "5 is the only prime that is divisible by 5"
...
"N is the only prime that is divisible by N"