frontpage.

Show HN: I worked on a sci-fi book for the 2025

https://docs.google.com/document/d/1MvqSpZYA51CDVU0UVeAsA-Y1Q-MVw9wq/edit
1•a1371•2m ago•0 comments

LocalGhost Manifesto: Local-first AI before the defaults ship

https://www.localghost.ai/manifesto
1•zerocool86•2m ago•1 comments

Working with People Is Less Random Than I Thought

https://withxin.substack.com/p/learning-to-meet-people-where-they
1•xincodes•5m ago•0 comments

Early Penguins Had Long, Dagger-Like Beaks for Skewering Fish, Fossils Reveal

https://www.smithsonianmag.com/smart-news/early-penguins-had-long-dagger-like-beaks-for-skewering...
1•noleary•7m ago•0 comments

Rectal oxygenation could save your life one day

https://hackaday.com/2025/12/31/rectal-oxygenation-could-save-your-life-one-day/
2•marbartolome•10m ago•0 comments

Why Prefer Textfiles? (2010)

http://textfiles.com/uploads/textfiles.txt
1•kmstout•12m ago•0 comments

The Watcher of Westfield, New Jersey

https://en.wikipedia.org/wiki/The_Watcher_of_Westfield,_New_Jersey
1•doener•15m ago•0 comments

Brewing the perfect cup – Grinding out the maths behind coffee

https://www.ul.ie/node/11387
1•austinallegro•15m ago•0 comments

The Ridiculous Engineering of the World's Most Important Machine [video]

https://www.youtube.com/watch?v=MiUHjLxm3V0
1•choult•16m ago•0 comments

Tell HN: X/Twitter is now removing the chronological timeline

1•bratao•18m ago•0 comments

TOMLDecoder Is Now Faster Than C (Thanks to AI)

https://duan.ca/2026/01/01/TOMLDecoder-Is-Faster-Than-C/
2•DaNmarner•19m ago•1 comments

Some of your cells are not genetically yours

https://www.nature.com/articles/d41586-025-04102-4
2•jnord•22m ago•0 comments

The Angry Path to Zen: AMD Zen Microcode Tools and Insights [video]

https://media.ccc.de/v/39c3-the-angry-path-to-zen-amd-zen-microcode-tools-and-insights
2•Fnoord•22m ago•0 comments

Americans brace to start new year without healthcare

https://www.bbc.com/news/articles/c98n8lrj7y6o
3•dabinat•23m ago•1 comments

"This website is insane" (stack explained) [video]

https://www.youtube.com/watch?v=HzL65tTeANs
1•ls-a•24m ago•0 comments

Inside China's AI coders' 'village' (2 min video)

https://www.youtube.com/watch?v=ZM0um_Jk1sQ
1•rmason•28m ago•0 comments

Everything You Know About Fitness Is a Lie (2011)

https://www.mensjournal.com/health-fitness/everything-you-know-about-fitness-is-a-lie-20120504
2•dredmorbius•28m ago•2 comments

'Rock candy' technique offers simpler way to capture carbon directly from air

https://techxplore.com/news/2025-12-candy-technique-simpler-capture-carbon.html
2•PaulHoule•29m ago•1 comments

Ask HN: Who wants to be hired? (January 2026)

2•vednig•30m ago•2 comments

Show HN: A Better Kanban Tool

https://tasklanes.app
1•fcuk112•31m ago•0 comments

Show HN: Find High Quality Undervalued Stocks in Minutes

https://findgreatstocks.com/
1•finsummary•31m ago•0 comments

Lock In – A command-line task tracker that docks to the side of your screen

https://www.letslockin.xyz
1•TedOS•32m ago•1 comments

Ask HN: How do you pronounce the name of Anthropic's series of LLMs?

2•alexjplant•32m ago•1 comments

Ask HN: Who is hiring? (January 2026)

4•vednig•37m ago•1 comments

Show HN: ADSBee, an open source dual band embedded ADS-B receiver for anything

https://pantsforbirds.com/adsbee-1090/
3•CoolNamesAllTkn•39m ago•3 comments

LCP File System: Memory-Safe ZFS Alternative

https://github.com/artst3in/lcpfs
1•handfuloflight•41m ago•0 comments

Pickle 1: The first soul computer

https://pickle.com/
1•dmarcos•41m ago•0 comments

Irrational Dedication

https://fs.blog/irrational-dedication/
1•frizlab•41m ago•0 comments

Keeping Sane in the New Year

https://wordpress.jmcgowan.com/wp/keeping-sane-in-the-new-year/
1•NumberSix•42m ago•2 comments

The FBI Wants AI Surveillance Drones with Facial Recognition

https://theintercept.com/2025/11/21/fbi-ai-surveillance-drones-facial-recognition/
2•measurablefunc•42m ago•0 comments

Numerical Linear Algebra Class in Julia TUM

https://venkovic.github.io/NLA-for-CS-and-IE.html
145•darboux•8mo ago

Comments

staplung•8mo ago
Not exactly the same material, but U. Michigan has its Robotics 101 course up as well: Computational Linear Algebra, also in Julia.

https://github.com/michiganrobotics/rob101/tree/main

ted_dunning•8mo ago
This is a nicely comprehensive course, but it looks like it is pretty fast-paced, especially in the last few lectures (some of those later slides definitely aren't finished).

As a reference, it looks very useful.

stabbles•8mo ago
A good resource is Gerard Sleijpen's course: https://webspace.science.uu.nl/~sleij101/Opgaven/NumLinAlg/

me3meme•8mo ago
I just selected Lecture 07 to take a look: it covers QR factorization and Householder reflections. The author proves how to construct a reflection that makes zeros in the first column, and then simply claims that repeating the procedure for the other columns finishes the proof. But he should prove, or at least justify, why the later reflections do not destroy the zeros produced by the earlier ones. He also proves that a particular vector v is the vector from which to construct the reflection (though there is a factor of 2 that was not correctly simplified, perhaps a LaTeX error); I think it would be more general and easier to prove that, for any w, the vector from w to its image f(w) is orthogonal to the plane of the reflection.

I thank the author for the slides, but this little proof needs some more care; I don't know about the quality of the other sections or of the slides overall. Anyway, I like how he tries to make things easy, but good work is hard.

Edited: I was wondering whether an LLM reading Lecture 07 would detect what was missing in the proof. I tried with DeepSeek, but its first feedback on the lecture was positive; when prompted about the incomplete proof, it recognized it as a common error and explained how to complete it. I also had to prompt it about the bad factor of 2 before it noticed that. So DeepSeek does not seem to be a useful tool for judging the quality of math content without very expert guidance; it suggested asking the LLM to compare this proof with another proof to detect the important differences.

Certhas•8mo ago
That's an absolutely obvious step though? As in, detailed lecture notes should maybe elaborate with a sentence, but in a lecture I would not put this on the slides; I would mention the core point and expect students at this level (who should have seen some amount of more theoretical linear algebra courses by then) to work out the one-line calculation.

There aren't even any real details to fill in: you iterate on the lower-right block, so anything you do is orthogonal to the upper-left block. Do a 2x2 block matrix multiplication to convince yourself that this preserves the form achieved so far.

me3meme•8mo ago
-- Do a 2x2 block matrix multiplication to convince yourself that this preserves the form achieved so far.

I don't consider this a proof. Perhaps you have in mind two simple but key properties of the reflection about the hyperplane orthogonal to a vector v: (a) the hyperplane is the fixed-point set of the reflection, and (b) the hyperplane is the orthogonal complement of the space spanned by v. From these two properties it follows that each step of making zeros does not change the previous zeros.

Your claim that advanced students don't need the details spelled out is not falsifiable. Citing Mac Lane: a monad is just a monoid in the category of endofunctors.

But from a practical point of view, the definitions and calculations leading up to the proof are kept very basic and simple. At that level of detail, I consider that pointing out that one must be careful not to destroy the previous zeros matches the level of the discourse.

Certhas•8mo ago
    [ 1  0 ] [ L  B ]   [ L  B' ]
    [ 0  Q ] [ 0  A ] = [ 0  A' ]

The proof says iterate on A, so that obviously creates a lower-dimensional rotation Q that will act on the full space as above.

Absolutely mention this in lecture notes/during the lecture.
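As a numerical sanity check of the block identity above (again a sketch with hypothetical values, not from the slides), the lower-dimensional reflection Q can be embedded as diag(1, Q) in Julia to confirm that it leaves the already-created zeros in the first column untouched while reducing the second column:

    using LinearAlgebra

    # Matrix after the first Householder step: zeros below the (1,1) pivot (hypothetical values).
    R1 = [5.0 2.0 1.0;
          0.0 3.0 4.0;
          0.0 1.0 2.0;
          0.0 2.0 1.0]

    # Householder reflection for the first column of the lower-right block.
    x = R1[2:end, 2]
    v = copy(x)
    v[1] += sign(x[1]) * norm(x)
    Q = I - 2 * (v * v') / (v' * v)   # 3x3, acts only on rows 2:4

    # Embed as diag(1, Q): identity on the first row, Q on the rest.
    H2 = [1.0 zeros(1, 3); zeros(3, 1) Q]

    R2 = H2 * R1
    @show R2[2:end, 1]  # still exactly zero: the first column is untouched
    @show R2[3:end, 2]  # ~ zeros: the second column is now reduced below its pivot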

slwvx•8mo ago
I guess the title would be better as "Numerical Linear Algebra Class in Julia at TUM". I.e., the "TUM" in the title does not mean that there's some new "TUM" version of Julia, but rather that the class is at the Technical University of Munich.