Numerical Linear Algebra Class in Julia TUM

https://venkovic.github.io/NLA-for-CS-and-IE.html
145•darboux•9mo ago

Comments

staplung•9mo ago
Not exactly the same material, but U. Michigan has its Robotics 101 course up as well: Computational Linear Algebra, also in Julia.

https://github.com/michiganrobotics/rob101/tree/main

ted_dunning•9mo ago
This is a nicely comprehensive course, but it looks pretty fast-paced, especially in the last few lectures (some of those later slides definitely aren't finished).

As a reference, it looks very useful.

stabbles•9mo ago
A good resource is Gerard Sleijpen's course: https://webspace.science.uu.nl/~sleij101/Opgaven/NumLinAlg/
me3meme•9mo ago
I just picked Lecture 07 to take a look: it covers QR factorization and Householder reflections. The author proves how to construct a reflection that puts zeros in the first column, and then simply claims that repeating the procedure for the remaining columns finishes the proof. But he should prove, or at least justify, why the later reflections do not destroy the zeros produced by the earlier ones. He also proves that a certain vector v is the one defining the reflection (though there is a factor of 2 that was not correctly simplified, maybe a LaTeX error); I think it would be more general and easier to prove that, for any w, the vector from w to its image f(w) is the orthogonal vector to the plane of the reflection.

I thank the author for the slides, but this little proof needs some more care; I don't know about the quality of the other sections or of the slides overall. Anyway, I like how he tries to make things easy, but good work is hard.

Edited: I was wondering whether an LLM reading Lecture 07 would detect what was missing in the proof. I tried DeepSeek, but its first feedback on the lecture was positive; when prompted about the incomplete proof, it recognized it as a common error and explained how to complete it. I also had to prompt it about the bad factor of 2 before it detected that. So it seems DeepSeek is not a useful tool for judging the quality of math content without very expert guidance; it suggested asking the LLM to compare this proof against another proof to detect important or vital differences.
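
(For concreteness, here is a minimal Julia sketch of the Householder QR procedure under discussion; the code and the function name are mine, not from the slides. The k-th reflection acts only on rows k:m, which is exactly the fact the proof leaves implicit.)

    using LinearAlgebra

    # Householder QR: one reflection per column. The k-th reflection
    # acts only on rows k:m, so it cannot disturb the zeros already
    # created below the diagonal in columns 1:k-1.
    function householder_qr(A)
        m, n = size(A)
        R = float(copy(A))
        Q = Matrix{Float64}(I, m, m)
        for k in 1:min(m - 1, n)
            x = R[k:m, k]
            v = copy(x)
            v[1] += (x[1] >= 0 ? 1.0 : -1.0) * norm(x)  # sign choice avoids cancellation
            nv = norm(v)
            nv == 0 && continue      # column already zero below the diagonal
            v ./= nv
            # H = I - 2vv': apply to the trailing block of R, accumulate into Q
            R[k:m, k:n] .-= 2.0 .* v * (v' * R[k:m, k:n])
            Q[:, k:m] .-= 2.0 .* (Q[:, k:m] * v) * v'
        end
        return Q, R
    end

    A = rand(5, 3)
    Q, R = householder_qr(A)
    @assert norm(Q * R - A) < 1e-10   # A = QR
    @assert norm(Q' * Q - I) < 1e-10  # Q is orthogonal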

Certhas•9mo ago
That's an absolutely obvious step, though? Detailed lecture notes should maybe elaborate with a sentence, but in a lecture I would not put this on the slides; I would mention the core point and expect students at this level (who should have seen some amount of more theoretical linear algebra by then) to work out the one-line calculation.

There aren't even any real details to fill in: you iterate on the lower-right block, so anything you do is orthogonal to the upper-left block. Do a 2x2 block matrix multiplication to convince yourself that this preserves the form achieved so far.
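
(A quick numeric check of this point in Julia; the 4x4 matrix is a made-up example, not from the slides. Embed the next reflection as diag(1, H) and the zeros already created in column 1 survive.)

    using LinearAlgebra

    # Column 1 is already reduced; only rows 2:4 still need work.
    A = [2.0  1.0  3.0  0.5;
         0.0  1.5  2.0  1.0;
         0.0  0.7  0.3  2.0;
         0.0  0.2  1.1  0.4]
    x = A[2:4, 2]
    v = normalize(x + sign(x[1]) * norm(x) * [1.0, 0.0, 0.0])
    H = I - 2.0 * v * v'                  # 3x3 Householder reflection
    G = [1.0 zeros(1, 3); zeros(3, 1) H]  # block-diagonal: identity on row 1
    B = G * A
    @assert all(abs.(B[2:4, 1]) .< 1e-12) # zeros in column 1 survived
    @assert all(abs.(B[3:4, 2]) .< 1e-12) # and column 2 is now reduced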

me3meme•9mo ago
-- Do a 2x2 block matrix multiplication to convince yourself that this preserves the form achieved so far.

I don't consider this a proof. Perhaps you have in mind two simple but key properties of the reflection about the hyperplane orthogonal to a vector v: (a) the hyperplane is the fixed-point set of the reflection, and (b) the hyperplane is the orthogonal complement of the space spanned by v. From these two properties it follows that each step of making zeros does not change the previous zeros.

Your claim that for advanced students there is no need to comment on the details is not falsifiable. Citing Mac Lane: a monad is just a monoid in the category of endofunctors.

But from a practical point of view, one can see the very basic level and simplicity of the definitions and calculations leading up to the proof. At that level of detail, I consider that pointing out that one must be careful not to destroy the previous zeros matches the level of the discourse.
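
(Written out, in my notation rather than the slides': for a unit vector v the reflection is H = I - 2vv^T, and both properties are one-line checks.)

    H = I - 2vv^{\top}, \qquad \|v\| = 1,
    Hv = v - 2v\,(v^{\top}v) = -v,
    Hw = w - 2v\,(v^{\top}w) = w \quad \text{whenever } v^{\top}w = 0.

In the QR step, the k-th reflection's v is supported on rows k:m, and every earlier column is zero on those rows, so it has no component along v and is left fixed.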

Certhas•9mo ago
    (1 0) (L B)   (L B')
    (0 Q) (0 A) = (0 A')

The proof says iterate on A, so that obviously creates a lower dimensional rotation Q that will act on the full space as above.

Absolutely mention this in lecture notes/during the lecture.

slwvx•9mo ago
I guess a better title would be "Numerical Linear Algebra Class in Julia at TUM". I.e., the "TUM" in the title does not mean that there's some new "TUM" version of Julia; rather, the class is taught at the Technical University of Munich.