frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
109•ColinWright•1h ago•81 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
22•surprisetalk•1h ago•21 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
118•alephnerd•2h ago•74 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
121•AlexeyBrin•7h ago•24 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
61•vinhnx•5h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
827•klaussilveira•21h ago•248 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
55•thelok•3h ago•7 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
3•gnufx•37m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
108•1vuio0pswjnm7•8h ago•136 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1058•xnx•1d ago•611 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•6h ago•5 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
8•valyala•1h ago•1 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
483•theblazehen•2d ago•175 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
7•valyala•2h ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
209•jesperordrup•12h ago•70 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
556•nar001•6h ago•256 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
222•alainrk•6h ago•343 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
36•rbanffy•4d ago•7 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•31 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
5•momciloo•1h ago•0 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
76•speckx•4d ago•75 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
286•dmpetrov•22h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
71•mellosouls•4h ago•75 comments

Cracovians: The Twisted Twins of Matrices

https://marcinciura.wordpress.com/2025/06/20/cracovians-the-twisted-twins-of-matrices/
81•mci•7mo ago

Comments

kubb•7mo ago
> However, the multiplication of a cracovian by another cracovian is defined differently: the result of multiplying an element from column i of the left cracovian by an element from column j of the right cracovian is a term of the sum in column i and row j of the result.

Am I the only one for whom this crucial explanation didn’t click? Admittedly, I might be stupid.

Wikipedia is a bit more understandable: „The Cracovian product of two matrices, say A and B, is defined by A ∧ B = (B^T)A"
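
In numpy terms (a minimal sketch of that Wikipedia definition, nothing more), the product looks like this:

    import numpy as np

    def cracovian(A, B):
        # Wikipedia's definition: A ∧ B = B^T A
        return B.T @ A

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(cracovian(A, B))  # B^T A
    print(A @ B)            # ordinary matrix product, for comparison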

tempodox•7mo ago
You're not the only one. That “explanation” is just really bad.
tgv•7mo ago
It's the crucial part, and even with the example, I couldn't understand it. Like I can't understand why the second column in the first matrix doesn't have signs. Or why the 0 in the result matrix is negative.

But in another link I found that it's column by column multiplication. So A × B = C, then C[i][j] = sum(A[k][i] * B[k][j]). Unfortunately, the example doesn't match that definition...

burnished•7mo ago
No, I think it is too ambiguous to be useful. The example wasn't helpful either, I think they needed to perform the individual calculations for clarity.
kubb•7mo ago
Yeah, usually you name the matrix elements a, b, c, d, etc. and write out the formula for the elements of the result.
AdamH12113•7mo ago
The example is simply wrong, according to other sources. This along with the inconsistent formatting makes me wonder if it was written by an LLM. It's a shame; this seems like an interesting topic.
andrewla•7mo ago
Agreed -- "is a term of the sum" is such an inverted way to look at it.

Better I think would be to say "the result in column i and row j is the sum of products of the elements in column i of the left cracovian and column j of the right cracovian".

And even by this definition the example given doesn't seem to track (and the strangeness of sometimes saying "+" and sometimes not, and having both "0" and "-0" in the example is bananas!):

   {  3  2 } {  1  -4 }  =   {  5   -2 }
   { -1  0 } { -2   3 }  =   {  0    2 }


   3 * 1 + -1 * -2 == 5 -- check
   3 * -4 + -1 * 3 == -15 -- what?
   2 * 1 + 0 * -2 == 2 (okay, but shouldn't this be in the lower left, column 1 dotted with column 2?)
   2 * -4 + 0 * 3 = -8 (now I'm really missing something)
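For what it's worth, a quick numpy check (a sketch assuming the Wikipedia definition A ∧ B = B^T A) reproduces the numbers from the hand calculation above, and neither column convention matches the matrix shown in the post:

    import numpy as np

    A = np.array([[ 3,  2],
                  [-1,  0]])
    B = np.array([[ 1, -4],
                  [-2,  3]])

    print(B.T @ A)  # [[  5   2] [-15  -8]] -- Wikipedia's B^T A
    print(A.T @ B)  # [[  5 -15] [  2  -8]] -- column of A dotted with column of B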
mci•7mo ago
Thanks for the feedback, everyone. I pasted my Polish text into Gemini to translate it into English. Gemini hallucinated the translation of this example. Now it should be OK.
pomian•7mo ago
Even in Polish, this comes out Greek to me.
pomian•7mo ago
I mean, it makes some sort of visual sense, but I can't grasp the results from the matrices shown.
mci•7mo ago
I took the liberty to replace my awkward wording with your "the result in column i and row j is the sum of products of the elements in column i of the left cracovian and column j of the right cracovian". Hope you don't mind. Thanks!
fxj•7mo ago
I didn't get the explanation of the multiplication. After reading the Wikipedia article it made more sense:

https://en.wikipedia.org/wiki/Cracovian

The Cracovian product of two matrices, say A and B, is defined by

A ∧ B = B^T A,

where B^T and A are assumed compatible for the common (Cayley) type of matrix multiplication and B^T is the transpose of B.

Since (AB)^T = B^T A^T, the products (A ∧ B) ∧ C and A ∧ (B ∧ C) will generally be different; thus, Cracovian multiplication is non-associative.

A good reference on how to use them and why they are useful is here (PDF):

https://archive.computerhistory.org/resources/access/text/20...
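
The non-associativity is easy to see numerically; a minimal sketch using that B^T A definition:

    import numpy as np

    def crac(A, B):
        # A ∧ B = B^T A
        return B.T @ A

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, size=(3, 3))
    B = rng.integers(-5, 5, size=(3, 3))
    C = rng.integers(-5, 5, size=(3, 3))

    left  = crac(crac(A, B), C)   # (A ∧ B) ∧ C = C^T B^T A
    right = crac(A, crac(B, C))   # A ∧ (B ∧ C) = B^T C A
    print(np.array_equal(left, right))  # generally False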

adastra22•7mo ago
As far as I can tell I don’t think it is correct to say that this isn’t a matrix. B is just written down in transposed form. Whether that makes the math more or less clear is something you can argue for or against, but it’s the same math and it is confusing to call it something else.
noosphr•7mo ago
It is a tensor of rank two with a special binary operation on tensors. These objects aren't matrices in the mathematical sense any more than convolution kernels are.
adastra22•7mo ago
A tensor of rank two is the same thing as a matrix…
noosphr•7mo ago
It isn't.

Matrices come with the matrix product defined over them.

This is one of four possible closed first-order tensor contractions for an order-two tensor, viz. A_ij B_ik, A_ij B_ki, A_ij B_jk, A_ij B_kj. Only the third is applicable to matrices; all other contractions only work for general tensors without transposition.

What we deal with in computer science are actually n dimensional arrays since we don't have the co and contravariant indices that define tensors in physics.
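
Spelled out with numpy.einsum (a sketch of the four contractions listed above):

    import numpy as np

    A = np.arange(4).reshape(2, 2)
    B = np.arange(4, 8).reshape(2, 2)

    np.einsum('ij,ik->jk', A, B)  # A_ij B_ik  ==  A^T B
    np.einsum('ij,ki->jk', A, B)  # A_ij B_ki  ==  A^T B^T
    np.einsum('ij,jk->ik', A, B)  # A_ij B_jk  ==  A B    (ordinary matmul)
    np.einsum('ij,kj->ik', A, B)  # A_ij B_kj  ==  A B^T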

adastra22•7mo ago
I think we are talking past each other.

The thing described by the article can be summarized by: "Any time you see A x B, you replace it with A x B^T" and it would, in practice, be exactly the same. I'm not sure the author understood this, because they then go on to do a bunch of performance checks to see if there is any difference between the two. Which there isn't, because under the hood it is the exact same operations. They just multiply columns into columns (or rows into rows) instead of rows into columns. But the implicit transpose would undo that.

You can note (correctly) that this doesn't line up with the precise, but arbitrary traditional definition of a matrix, and that is correct. But that is just word games because you can very simply, using only syntax and no calculations, transform one into the other.
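
One way to check this in numpy (a sketch, assuming the B^T A definition): the transpose is only a view, so the cracovian product is a single ordinary matmul, just with the transpose flag passed down to the underlying BLAS routine:

    import numpy as np

    A = np.random.rand(500, 500)
    B = np.random.rand(500, 500)

    Bt = B.T
    print(Bt.base is B)  # True: .T is a view, no data is copied

    C = Bt @ A           # the cracovian product B^T A, one ordinary matmul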

gnulinux•7mo ago
I guess I'm skeptical of using a non-associative algebra instead of something that can trivially be made into a ring or field (i.e. matrix algebra). What advantages does this give us?
mci•7mo ago
Author here. There are no practical advantages, as far as I know. Not even faster multiplication on today's computers.
hansvm•7mo ago
One thing that comes up in the sort of ML code I like to write is careful attention to memory layout. Cracovians, defined according to some sibling comment as (B^T)A, make that a little more natural, since B and A can now have the same layout. I haven't used them though, so I don't have a good sense of whether that's more or less painful than other approaches.
bravesoul2•7mo ago
Shouldn't it be the same on a computer, right? The change is in human perception, not in what actually happens when multiplying.
esafak•7mo ago
Missed a chance to call it the twisted sister!
TimorousBestie•7mo ago
What an interesting little nook of matrix analysis history! Thanks for sharing. :)
noosphr•7mo ago
In Einstein notation this operation is A_ij B_kj, which incidentally shows why Einstein notation is so useful.
Syzygies•7mo ago
I'm a mathematician who taught linear algebra for decades. I love Einstein notation. I don't find Cracovians interesting at all.

Old texts got really worked up over whether a vector was a row or a column. The programming language APL resolved this quite nicely: a scalar has no dimensions, a vector has its length as its one dimension, ... Arbitrary rank objects all played nicely with each other, in this system.

A Cracovian is a character or two's difference in APL code. There's a benign form of mental illness in learning anything, where one clutches onto something novel and obsesses over it, rather than asking "That was exciting! What novel idea will I learn in the next five minutes?" I have friends from my working-class high school who still say "ASSUME makes an ass of you and me" as if they just heard it for the first time, while the most successful mathematicians that I know keep moving like sharks.

I wouldn't stall too long thinking about Cracovians, as amusing a skim as the post provided.

noosphr•7mo ago
I mean, there's nothing special about naming binary operations on tensors of fixed rank. Matrices have some nice mathematical properties, which is why they are studied so much in mathematics. But for number crunching there is no reason to prefer them to cracovians, or vice versa, without knowing what the underlying memory layout is in hardware.
semiinfinitely•7mo ago
Uhh so it's just matrices where the left slot of matmul is transposed?
gus_massa•7mo ago
> It turns out that multiplying cracovians by computers is not faster than multiplying matrices.

That's very specific to Python. A few years ago we were multiplying a lot of matrices in Fortran and we tried to transpose one of the matrices before the multiplication. With -O0 it made a huge difference, because the calculation used contiguous numbers and was more cache-friendly. Anyway, with -O3 the compiler did some trick that made the difference disappear, but I never tried to understand what the compiler was doing.

wjholden•7mo ago
I would expect that Julia could similarly show performance boosts here because of its column-major memory layout.
rundigen12•7mo ago
Read to the end and... what was the point of that? Where's the payoff?

There was a claim near the top that some things are easier to compute when viewed as cracovians, then some explanation, then suddenly it switches to numpy and shows that the timing is the same.

New title: "Cracovians are a Waste of (the Reader's) Time"?