frontpage.

What an unprocessed photo looks like

https://maurycyz.com/misc/raw_photo/
266•zdw•2h ago•66 comments

Stepping down as Mockito maintainer after 10 years

https://github.com/mockito/mockito/issues/3777
183•saikatsg•4h ago•85 comments

Unity's Mono problem: Why your C# code runs slower than it should

https://marekfiser.com/blog/mono-vs-dot-net-in-unity/
71•iliketrains•2h ago•28 comments

62 years in the making: NYC's newest water tunnel nears the finish line

https://ny1.com/nyc/all-boroughs/news/2025/11/09/water--dep--tunnels-
32•eatonphil•1h ago•9 comments

PySDR: A Guide to SDR and DSP Using Python

https://pysdr.org/content/intro.html
83•kklisura•4h ago•4 comments

Spherical Cow

https://lib.rs/crates/spherical-cow
15•Natfan•1h ago•1 comment

MongoBleed Explained Simply

https://bigdata.2minutestreaming.com/p/mongobleed-explained-simply
67•todsacerdoti•3h ago•16 comments

Show HN: My app just won best iOS Japanese learning tool of 2025 award

https://skerritt.blog/best-japanese-learning-tools-2025-award-show/
8•wahnfrieden•35m ago•1 comment

Slaughtering Competition Problems with Quantifier Elimination

https://grossack.site/2021/12/22/qe-competition.html
14•todsacerdoti•1h ago•0 comments

Researchers Discover Molecular Difference in Autistic Brains

https://medicine.yale.edu/news-article/molecular-difference-in-autistic-brains/
24•amichail•2h ago•8 comments

Growing up in “404 Not Found”: China's nuclear city in the Gobi Desert

https://substack.com/inbox/post/182743659
674•Vincent_Yan404•17h ago•293 comments

Time in C++: Inter-Clock Conversions, Epochs, and Durations

https://www.sandordargo.com/blog/2025/12/24/clocks-part-5-conversions
17•ibobev•2d ago•2 comments

Remembering Lou Gerstner

https://newsroom.ibm.com/2025-12-28-Remembering-Lou-Gerstner
60•thm•5h ago•28 comments

Why I Disappeared – My week with minimal internet in a remote island chain

https://www.kenklippenstein.com/p/why-i-disappeared
20•eh_why_not•2h ago•0 comments

Building a macOS app to know when my Mac is thermal throttling

https://stanislas.blog/2025/12/macos-thermal-throttling-app/
220•angristan•12h ago•96 comments

Intermission: Battle Pulses

https://acoup.blog/2025/12/18/intermission-battle-pulses/
6•Khaine•2d ago•0 comments

Doublespeak: In-Context Representation Hijacking

https://mentaleap.ai/doublespeak/
45•surprisetalk•6d ago•5 comments

Dolphin Progress Report: Release 2512

https://dolphin-emu.org/blog/2025/12/22/dolphin-progress-report-release-2512/
62•akyuu•2h ago•5 comments

Software engineers should be a little bit cynical

https://www.seangoedecke.com/a-little-bit-cynical/
106•zdw•3h ago•77 comments

Learn computer graphics from scratch and for free

https://www.scratchapixel.com
163•theusus•13h ago•20 comments

Show HN: Pion SCTP with RACK is 70% faster with 30% less latency

https://pion.ly/blog/sctp-and-rack/
33•pch07•6h ago•5 comments

As AI gobbles up chips, prices for devices may rise

https://www.npr.org/2025/12/28/nx-s1-5656190/ai-chips-memory-prices-ram
31•geox•1h ago•19 comments

John Malone and the Invention of Liquid-Based Engines

https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-93-1350-25
12•akshatjiwan•4d ago•2 comments

Show HN: Phantas – A browser-based binaural strobe engine (Web Audio API)

https://phantas.io
14•AphantaZach•3h ago•7 comments

One year of keeping a tada list

https://www.ducktyped.org/p/one-year-of-keeping-a-tada-list
217•egonschiele•6d ago•60 comments

Oral History of Richard Greenblatt (2005) [pdf]

https://archive.computerhistory.org/resources/text/Oral_History/Greenblatt_Richard/greenblatt.ora...
9•0xpgm•3d ago•0 comments

CEOs are hugely expensive. Why not automate them?

https://www.newstatesman.com/business/companies/2023/05/ceos-salaries-expensive-automate-robots
118•nis0s•1h ago•100 comments

Calendar

https://neatnik.net/calendar/?year=2026
949•twapi•19h ago•115 comments

Vibration Isolation of Precision Objects (2005) [pdf]

http://www.sandv.com/downloads/0607rivi.pdf
23•nill0•6d ago•2 comments

Designing Predictable LLM-Verifier Systems for Formal Method Guarantee

https://arxiv.org/abs/2512.02080
54•PaulHoule•9h ago•11 comments

Keep the Robots Out of the Gym

https://danielmiessler.com/blog/keep-the-robots-out-of-the-gym
36•Group_B•2h ago

Comments

turtleyacht•2h ago
If gym is a mindset, hard to separate from the other.
danielrm26•1h ago
I think that comes down to documenting the mindset as a goal and then using all the AI, scaffolding, and tools available to that system to help you nurture that mindset.
PaulDavisThe1st•2h ago
> [AI/LLM writes] why I made the decisions I did

When one thinks about human decision making, there are at least two classes of decisions:

1. decisions made with our "fast" minds: ducking out of the way of an incoming object, turning around when someone calls our name ... a whole host of decisions made without much if any conscious attention, and that if you asked the human who made those decisions you wouldn't get much useful information about.

2. decisions made with our "slow" minds: deciding which of 3 gifts to get for Aunt Mary, choosing to give a hug to our cousin, deciding to double the chile in the recipe we're cooking ... a whole host of decisions that require conscious reasoning, and if you asked the human who made those decisions you would get a reasonably coherent, explanatory logic chain.

When considering why an LLM "made the decisions that it did", it seems important to understand whether those decisions are closer to type 1 or type 2. If the LLM arrived at them the way we arrive at a type 1 decision, it is not clear that an explanation of why is of much value. If an LLM arrived at them the way we arrive at a type 2 decision, the explanation might be fairly interesting and valuable.

lxgr•36m ago
Does it really matter how the LLM got to a (correct) conclusion?

As long as the explanation is sound as well and I can follow it, I don't really care if the internal process looked quite different, as long as it's not outright deceptive.

PaulDavisThe1st•10m ago
I'm just quoting the author of TFA, who did in fact appear to want periodic explanations of how their "agent" arrived at its decisions.
arm32•2h ago
You're really rizzing up the whole "AI can do almost everything better than humans" point. Is there a chance that your investments are causing you to sensationalize things a bit? Because I can promise you, AI only does better than me at the things I have absolutely no skill in.
fragmede•1h ago
If you need to believe that everyone's only in it for their investment portfolio, for you to sleep well at night, I mean, you do you, but recognize that that's a giant balloon of copium you're huffing.
davnicwil•2h ago
It's an interesting one. We'll have to discover where to draw that line in education and training.

It is an incredible accelerant in top-down 'theory driven' learning, which is objectively good, I think we can all agree. Like, it's a better world having that than not having it. But at the same time there's a tension between that and the sort of bottom-up practice-driven learning that's pretty inarguably required for mastery.

Perhaps the answer is as mundane as: one must simply do both, and failing to do both will just result in... failure to learn properly. Kind of as it is today, except that today there's often no truly accessible or convenient top-down option at all, so it's not a question anyone thinks about.

danielrm26•2h ago
OP here, yeah, I think that's a really good point.

I feel like the way I'm building this in is a violent maintenance of two extremes.

On one hand, fully merged with AI and acting like we are one being, having it do tons of work for me.

And then on the other hand is like this analog gym where I'm stripped of all my augmentations and tools and connectivity, and I am being quizzed on how well I can do just by myself.

And how well I do in the NAUG (non-augmented) scenario determines what tweaks need to be made to regular AUG workflows to improve my NAUG performance.

Especially for those core identity things that I really care about. Like critical thinking, creating and countering arguments, identifying my own bias, etc.

I think as the tech gets better and better, we'll eventually have an assistant whose job is to make sure that our un-augmented performance is improving, vs. deteriorating. But until then, we have to find a way to work this into the system ourselves.

davnicwil•1h ago
there could also be an almost chaos-monkey-like approach of cutting off the assistance at indeterminate intervals, so you've got to maintain a baseline of skill / muscle memory to be able to deal with this.

I'm not sure if people would subject themselves to this, but perhaps the market will just serve it to us as it currently does with internet and services sometimes going down :-)

I know for me when this happens, and also when I sometimes do a bit of offline coding in various situations, it feels good to exercise that skill of just writing code from scratch (erm, well, with intellisense) and kind of re-assert that I can do it now we're in tab-autocomplete land most of the time.

But I guess opting into such a scheme would be one-to-one with the type of self determined discipline required to learn anything in the first place anyway, so I could see it happening for those with at least equal motivation to learn X as exist today.

xboxnolifes•1h ago
How I see it, LLMs aren't really much different than existing information sources. I can watch video tutorials and lectures all day, but if I don't sit down and practice applying what I see, very little of it will stick long term.

The biggest difference I see is, pre-LLM search, I spent a lot more time looking for a good source for what I was looking for, and I probably picked up some information along the way.

mathgeek•1h ago
> We'll have to discover where to draw that line in education and training.

I'm not sure we (meaning society as a whole) are going to have enough say to really draw those lines. Individuals will have more of a choice going forward, just like they did when education was democratized via many other technologies. The most that society will probably have a say in is what folks are allowed to pay for as far as credentials go.

What I worry about most is that AI seems like it's going to make the already large have/not divide grow even more.

davnicwil•1h ago
that's actually what I mean by we. As in, different individuals will try different strategies with it, and we the collective will discover what works based on results.
nhinck2•21m ago
> It is an incredible accelerant in top-down 'theory driven' learning

Is it? People claim this but I really haven't seen any proof that it is true.

llmslave2•2h ago
We want machines to do the laundry and clean our house so we have more time to create art and write code. Seems like in our current trajectory, the machines will produce the art and code so we have more time to clean our house and do laundry....
danielrm26•1h ago
Ah man...that's good.

But maybe both of those are in the category of undesirable things.

And the things we end up with are like art and baking and walking and talking and drinking coffee and such.

Professional Chess is a nice pattern here. A calculator can beat Magnus Carlsen at this point, but Chess is more popular than ever. So it should be ok if AI/Robots are better than us at all the stuff we still decide to do.

sunrunner•1h ago
Except Professional Chess, taken to mean players earning a living solely from paid tournament play, is in the low hundreds? Thousands? Meanwhile there are over 20 million 'professional' software developers [1]. There are many things about that single number demographic that I would argue against, but despite that I'm not sure there's ever been a market for any kind of 'professional chess player', yet there is for 'professional software developer' (for some definitions of 'professional' and 'software').

[1] https://evansdata.com/reports/viewRelease.php?reportID=9

llmslave2•1h ago
Yeah I don't see clankers ever taking over art, music, sports etc. People care in large part about those things because of the human aspect.

I'd love for them to take my job as a programmer though, as that would certainly free up time for me to travel and drink coffee and Guinness.

stavros•1h ago
Where would you find the money to do those things with?
kelseyfrog•1h ago
If all productive human labor is replaced by AI we have larger problems than where we'll find the money.
sunrunner•16m ago
I guess we all just need to come up with a shared definition for ‘productive’ first then? Shouldn’t be too difficult.
llmslave2•59m ago
Hopefully from money I've saved up or a business I started in the meantime.
ares623•1h ago
Maybe cleaning house and doing laundry needs to start being marketed as an art form /s
rzzzt•1h ago
Ryan Anderson: Are We Having Fun Yet? https://www.instagram.com/itsryandanderson/reel/DRR241Mjr_F/
fragmede•1h ago
Very cute. Hardware has very different challenges than software. If you want to get into robotics, it's a growing field with a lot of jobs and money!
rangestransform•51m ago
Anecdotally LLM vibe coding has sucked ass on the big robotics monorepo I work on
ab227•1h ago
This is a great framework for self-development, but I wonder if the Job vs. Gym analogy is a bit premature. There seems to be a level of Silicon Valley optimism here that assumes AI already surpasses human capability in these creative areas. From my perspective, AI only outperforms in areas where the human hasn't developed a real craft. Is it possible that the current hype is causing us to undervalue the unique quality of human-only output?
dnautics•1h ago
- using AI effectively is itself a skill that needs training, especially if you're already good at the critical thinking stuff.

- most of what I actually do when using AI is critical thinking anyway. I'm constantly reviewing the AI's code and finding little places where the AI tried to get away with a shortcut, or started to overarchitect a solution, or both.

stavros•59m ago
The point that we should practice the things we want to be, as in, the things we want as part of our identity, is a really good insight. Even if the AI can do X well, maybe I want to also be able to do X, therefore I should practice it.

I don't know what those things will be for me, yet, but it's good to have a more specific and directed way to think about which skills I want to keep.

blargey•50m ago
Lurking between the lines in arguments about AI writing/code/art is that whether or not an activity is "gym" or "job" is often in the eye of the beholder.

People who never "went to the gym" in a field are all too eager to brush off the entire design space as pure Job that can and should be fully delegated to AI posthaste.