frontpage.

Alias Method

https://en.wikipedia.org/wiki/Alias_method
1•usgroup•4m ago•0 comments

I exposed my Homelab through Cloudflare Tunnels

http://ebourgess.dev/posts/exposing-homelab-through-cloudflare-tunnel/
2•ebourgess•5m ago•0 comments

Christmas 500 years ago was a drunken 6-week feast

https://fortune.com/2025/12/25/medieval-peasant-christmas-was-better-than-modern-holidays-histori...
1•Anon84•9m ago•0 comments

MemCachier Status Currently experiencing instability (for some days already)

https://status.memcachier.com
1•salzig•9m ago•0 comments

ReCollab: Retrieval-Augmented LLMs for Cooperative Ad-Hoc Teammate Modeling

https://arxiv.org/abs/2512.22129
1•StatsAreFun•10m ago•0 comments

Coverage.py sleepy snake logo (2019)

https://nedbatchelder.com/blog/201912/sleepy_snake.html
2•myroon5•10m ago•0 comments

New York's Subway, an Interview with Matthew Algeo

https://www.exasperatedinfrastructures.com/p/the-best-book-i-read-all-year
1•samsklar1•10m ago•0 comments

Show HN: A dynamic key-value IP allowlist for Nginx

https://github.com/dayt0n/kvauth
1•dayt0n•11m ago•0 comments

NYC Mayoral Inauguration Bans Raspberry Pi and Flipper Zero Alongside Explosives

https://blog.adafruit.com/2025/12/30/nyc-mayoral-inauguration-bans-raspberry-pi-and-flipper-zero-...
2•ptorrone•12m ago•0 comments

Show HN: Claude Cognitive – Working memory for Claude Code

https://github.com/GMaN1911/claude-cognitive
4•MirrorEthic•13m ago•1 comments

Nvidia in advanced talks to acquire AI21 in $2-3B deal

https://www.calcalistech.com/ctechnews/article/rkbh00xnzl
1•hbarka•13m ago•1 comments

A Course in Ring Theory

https://arxiv.org/abs/2512.22133
1•StatsAreFun•13m ago•0 comments

The Origami Wheel That Could Explore Lunar Caves

https://www.universetoday.com/articles/the-origami-wheel-that-could-explore-lunar-caves
1•rbanffy•14m ago•0 comments

You're Getting 'Screen Time' Wrong

https://www.theatlantic.com/technology/2025/10/screen-time-television-internet/684659/
1•Anon84•15m ago•0 comments

Exploiting Prime Selection Vulnerabilities in Public Key Cryptography (RSA)

https://arxiv.org/abs/2512.22720
1•bikenaga•15m ago•1 comments

HP told me I need to buy a new motherboard to reset the forgotten BIOS password

https://old.reddit.com/r/laptops/comments/1iauc47/hp_told_me_i_need_to_buy_a_new_motherboard_to/
2•sipofwater•15m ago•0 comments

Flint

https://www.flint.fyi/blog/introducing-flint/
2•tjwds•16m ago•0 comments

Hou Tu Pranownse Inglish

https://www.zompist.com/spell.html
1•aaronspeedy•16m ago•0 comments

Tw93/Mole: Deep clean and optimize your Mac

https://github.com/tw93/Mole
2•sharjeelsayed•18m ago•0 comments

EdgeVec – Vector search in the browser, no server (Rust/WASM)

https://github.com/matte1782/edgevec
1•matteo1782•19m ago•1 comments

Reconstructing UI behavior from video instead of screenshots

https://www.replay.build/learn/behavior-driven-ui-reconstruction
1•ma1or•22m ago•1 comments

Using Perplexity, Firecrawl and Gemini Flash to analyze 305 Links for 12.70 USD

https://vibegui.com/article/shipping-vibegui-bookmarks-v1-architecture-costs-and-lessons
2•gadr90•26m ago•1 comments

Coase's Penguin, Or, Linux and the Nature of the Firm [pdf]

https://www.benkler.org/CoasesPenguin.PDF
3•loughnane•27m ago•0 comments

Gallery of Bad Shell Code

https://github.com/koalaman/shellcheck
3•behnamoh•29m ago•0 comments

The FDA and FMT regulation (2024)

https://www.humanmicrobes.org/blog/fda-fmt-regulation
1•user234683•29m ago•0 comments

Brain immune cells may drive more damage in females than males with Alzheimer's

https://medicalxpress.com/news/2025-12-brain-immune-cells-females-males.html
1•bikenaga•31m ago•1 comments

Deep Filament Extraction for 3D Concrete Printing

https://arxiv.org/abs/2512.00091
1•PaulHoule•31m ago•0 comments

Show HN: Mafia Arena – LLMs play social deduction games against each other

https://mafia-arena.com
1•mohsen1•32m ago•0 comments

Mamdani Will Be Sworn in at Abandoned Subway Station Beneath City Hall

https://www.nytimes.com/2025/12/29/nyregion/mamdani-subway-sworn-in-mayor.html
2•Anon84•34m ago•0 comments

Psilocybin triggers activity-dependent rewiring of large-scale cortical networks

https://www.cell.com/cell/fulltext/S0092-8674(25)01305-4
4•QueensGambit•34m ago•0 comments

Professional software developers don't vibe, they control

https://arxiv.org/abs/2512.14012
71•dpflan•2h ago

Comments

game_the0ry•2h ago
> Through field observations (N=13) and qualitative surveys (N=99)...

Not a statistically significant sample size.

superjose•1h ago
Same thoughts exactly.
HPsquared•1h ago
Significance depends on effect size.
bee_rider•1h ago
97 samples is enough to get a 95% confidence level if you accept a 10% margin of error. 99 is not so bad, at least.

https://www.surveymonkey.com/mp/sample-size-calculator/
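
For anyone checking that arithmetic: it follows from the standard sample-size formula for estimating a proportion, which is presumably what the linked calculator implements. A minimal Python sketch, assuming a large population and the worst-case proportion p = 0.5 (nothing here is taken from the paper itself):

    import math

    # n = z^2 * p * (1 - p) / e^2
    # z = 1.96 for 95% confidence, p = 0.5 (worst case), e = 0.10 margin of error
    z, p, e = 1.96, 0.5, 0.10
    n = math.ceil(z**2 * p * (1 - p) / e**2)
    print(n)  # 97, matching the figure quoted above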

runtimepanic•2h ago
The title is doing a lot of work here. What resonated with me is the shift from “writing code” to “steering systems” rather than the hype framing. Senior devs already spend more time constraining, reviewing, and shaping outcomes than typing syntax. AI just makes that explicit. The real skill gap isn’t prompt cleverness, it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants. That part doesn’t scale magically.
llmslave2•1h ago
Does using an LLM to craft Hackernews comments count as "steering systems"?
coip•1h ago
You're totally right! It's not steering systems -- it's cooking, apparently
AlotOfReading•1h ago
It's difficult to steer complex systems correctly, because no one has a complete picture of the end goal at the outset. That's why waterfall fails. Writing code agentically means you have to go out of your way to think deeply about what you're building, because it won't be forced on you by the act of writing code. If your requirements are complex, they might actually be a hindrance because you're going to have to learn those lessons from failed iterations instead of avoiding them preemptively.
asmor•1h ago
Is anyone else getting more mentally exhausted by this? I get more done, but I also miss the relaxing code typing in the middle of the process.
simonw•1h ago
Yes, absolutely, I can be mentally wiped out by lunch.
jghn•1h ago
That's kind of the point here. Once a dev reached a certain level, they often weren't doing much "relaxing code typing" anyways before the AI movement. I don't find it to be much different than being a tech lead, architect, or similar role.
tikimcfee•1h ago
Ya know, I have to admit feeling something like this. Normally, the amount of stuff I put together in a work day offers a sense of completion or even a bit of a dopamine bump because of a "job well done". With this recent work I've been doing, it's instead felt like I've been spending several times more energy communicating intent instead of doing the work myself; that communication seems to be making me more tired than the work itself. Similar?
whynotminot•1h ago
It feels like we all signed up to be ICs, but now we’re middle managers and our reports are bots.
perfmode•27m ago
You’re possibly not entering into the flow state anymore.

Flow is effortless, and it is rejuvenating.

I believe:

While communication can be satisfying, it’s not as rejuvenating as resting in our own Being and simply allowing the action to unfold without mental contraction.

Flow states.

When the right level of challenge and capability align and you become intimate with the problem. The boundaries of me and the problem dissolve and creativity springs forth. Emerging satisfied. Nourished.

teaearlgraycold•1h ago
I like to alternate focusing on AI wrangling and writing code the old fashioned way.
bugglebeetle•1h ago
Nah, I don't miss at all typing all the tests, CLIs, and APIs I've created hundreds of times before. I dunno if it's because I do ML stuff, but it's almost all "think a lot about something, do some math, and then type thousands of lines of the same stuff around the interesting work."
mupuff1234•36m ago
For me it's the opposite: I'm wasting less energy on debugging silly bugs and fighting/figuring out some annoying config.

But it does feel less fulfilling I suppose.

agumonkey•25m ago
I think there are two groups of people emerging: deep / fast / craft-and-decomposition-loving vs black box / outcome-only.

I've seen people unable to work at average speed on small features suddenly reach above-average output through an LLM CLI, and I could sense the pride in them. Which is at odds with my experience of work. I love to dig down, know a lot, model and find abstractions on my own. There an LLM will 1) not understand how my brain works, 2) produce something workable but that requires me to stretch mentally, and most of the time I leave numb. In the last month I've seen many people expressing similar views.

sanufar•2m ago
I think for me, the difference really comes down to how much ownership I want to take in regards to the project. If it’s something like a custom kernel that I’m building, the real fun is in reading through docs, learning about systems, and trying to craft the perfect abstractions; but if it’s wiring up a simple pipeline that sends me a text whenever my bus arrives, I’m happy to let an LLM crank that out for me.

I’ve realized that a lot of my coding sits on this personal satisfaction vs. utility matrix, and LLMs let me focus a lot more energy on high-satisfaction projects.

remich•2m ago
I get what you're saying, but I would say that this does not match my own experience. For me, prior to the agentic coding era, the problem was always that I had way more ideas for features, tools, or projects than I had the capacity to build by hand, especially once you add in the inevitable difficulties of procrastination and getting started.

I am a very above-average engineer when it comes to speed at completing work well, whether that's typing speed or comprehension speed, and still these tools have felt like giving me a jetpack for my mind. I can get things done in weeks that would have taken me months before, and that opens up space to consider new areas that I wouldn't have even bothered exploring before because I would not have had the time to execute on them well.

codeformoney•35m ago
The stereotype that writing code is for junior developers needs to die. Some devs are hired with lofty titles specifically for their programming aptitude and esoteric systems knowledge, not to play implementation telephone with inexperienced devs.
lesuorac•2h ago
> Most Recent Task for Survey | Number of Survey Respondents
> Building apps: 53
> Testing: 1

I think this sums up everybody's complaints about AI-generated code. Don't ask me to be the one to review work you didn't even check.

rco8786•2h ago
Yea. Nobody wants to be a full-time code reviewer.
jaggederest•1h ago
Hi it's me, the guy who wants to be a full-time code reviewer.
nemo•1h ago
Be careful what you wish for.
4b11b4•1h ago
I like to think of it as "maintaining fertile soil"
banbangtuth•1h ago
You know what? After seeing all these articles about AI/LLMs for these past 4 years, about how they are going to replace me as a software developer and about how I am not productive enough without using 5 agents and being a project manager:

I. Don't. Care.

I don't even care about those debates out there. Debates about whether LLMs work and will replace programmers? Say they do; OK, so what?

I simply have too much fun programming. I am just a mere fullstack business-line programmer, a generic, random, replaceable dude; you can find people like me a dime a dozen.

I do use LLMs as a Stack Overflow/docs replacement, but I always write all my code by hand.

If you want to replace me, replace me. I'll go to companies that need me. If there are no companies that need my skill, fine, then I'll just do this as a hobby, and probably flip burgers outside to make a living.

I don't care about your LLM, I don't care about your agent, I probably don't even care about the job prospects for that matter if I have to be forced to use tools that I don't like and to use workflows I don't like. You can go ahead find others who are willing to do it for you.

As for me, I simply have too much fun programming. Now if you'll excuse me, I need to go have fun.

agentifysh•1h ago
Having fun isn't tied to employment unless you are self-employed, and even then what's fun should not be the driving force.
banbangtuth•1h ago
Why? It is a matter of values. Fun can be a driving force just like money and stability are. It is simply a matter of your values (and your sacrifices).

Like I said, I am just a generic, replaceable, dime-a-dozen programmer dude.

agentifysh•1h ago
You don't get paid to have fun but to produce as a laborer.

A job isn't supposed to be fun. It's nice when it is, but it shouldn't be what drives decisions.

banbangtuth•1h ago
You mean it shouldn't be the driving force behind your employer's decisions. Yes, I agree 10000%.

I meant it can be your (not necessarily your employer's) driving force in life.

Of course, you need to suffer. That's what having tradeoffs is about.

agentifysh•25m ago
almost all employers are going to expect you to use AI and produce more with it

You can definitely choose not to participate and give the opportunity to someone who is happy to use AI and still have fun with it.

llmslave2•1h ago
That sounds miserable to me :(
agentifysh•1h ago
You work on somebody's dime; it's no longer your choice.
llmslave2•1h ago
It's my life, it's my choice.
zem•40m ago
it's your choice whose dime you work on. they can compete for your work by making it fun for you.
agentifysh•33m ago
sure unemployment is also a choice
lifetimerubyist•1h ago
"get a job doing something you enjoy and you'll never work a day in your life"

or something like that

llmslave2•1h ago
I simply will not spend my life begging and coaxing a machine to output working code. If that is what becomes of this profession, I will just do something else :)
aspenmartin•1h ago
It would definitely be the profession if we stopped developing things today. Think about the idea of coding agents 2 years ago: I personally found them very unrealistic, and I am now coding exclusively with them despite them being either neutral or a net negative to my development time, simply because I see the writing on the wall that in 6 months to a year they will probably be a huge net positive, and in 2-3 years the dismissive attitude towards adoption will start to look kind of silly (no offense). To me we are _just_ at the inflection point where using and not using coding agents are both totally sensible decisions.
ryanobjc•1h ago
If I wanted to do that, I'd just move into engineering management and work with something less temperamental and predictable - humans.

I'd at least be more likely to get a boost in impact and ability to affect decision making, maybe.

lifetimerubyist•1h ago
Until you realize you're just begging and coaxing a human to better beg and coax a machine to output working code - when you could just beg and coax the machine yourself.
llmslave2•1h ago
At least I'd be the one interfacing with a human instead of a machine :P
yacthing•1h ago
Easy to say if you either:

(1) already have enough money to survive without working, or

(2) don't realize how hard of a life it would be to "flip burgers" to make a living in 2026.

We live very good lives as software developers. Don't be a fool and think you could just "flip burgers" and be fine.

banbangtuth•1h ago
Ah, I actually did flip burgers. So I know.

I also did dry cleaning, cleaning service, deli, delivery guy, etc.

Yup I now have enough money to survive without working.

But I also am very low maintenance, thanks to my early life being raised in harsh conditions.

I am not scared to go back flipping burgers again.

lifetimerubyist•1h ago
Hear hear. I didn't spend half my life getting an education, competing in the corporate crab bucket, retraining and upskilling just to turn into a robot babysitter.
websiteapi•1h ago
We've never seen a profession drive itself so aggressively to irrelevance. Software engineering will always exist, but it's amazing the pace at which pressure against the profession is rising. 2026 will be a very happy new year indeed for those paying the salaries. :)
zwnow•1h ago
Also, it really baffles me how many are actually in on the hype train. It's a lot more than the crypto bros back in the day. Good thing AI still can't reason and innovate stuff. Also, leaking credentials is a felony in my country, so I also won't ever attach it to my codebases.
fragmede•1h ago
your credentials shouldn't be in your codebase to begin with!
zwnow•1h ago
.env files are a thing in tons of codebases
iwontberude•1h ago
But that's at runtime; secrets are going to be deployed in a secure manner after the code is released.
zwnow•58m ago
.env files are used during development as well; for some things like PayPal you don't have to change the credentials, you just enable sandbox mode. If I had some LLM attached to my codebase, it would be able to read those credentials from the .env file.

This has nothing to do with deployment. I never talked about deployment.

Carrok•45m ago
If you have your PayPal creds in your repository, you are doing it wrong.
mkozlows•33m ago
If your secrets are in your repo, you've probably already leaked them.
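
The pattern these replies are pointing at, sketched minimally: real credentials live in the deployment or developer environment (or in a local .env file that is listed in .gitignore and never committed), and the code only reads them at runtime. The variable names below are hypothetical, purely for illustration, and are not taken from PayPal's SDK or anyone's repo:

    import os

    # Hypothetical names, for illustration only. The real values come from the
    # environment (CI secrets, a secret manager, or an uncommitted local .env
    # loaded into the environment), never from files checked into the repo.
    client_id = os.environ.get("PAYPAL_CLIENT_ID")
    client_secret = os.environ.get("PAYPAL_CLIENT_SECRET")

    if not client_id or not client_secret:
        raise RuntimeError("credentials must be provided via the environment")
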
aspenmartin•1h ago
I think the issue is folks talk past each other. People who find coding agents useful or enjoyable are labeled “on the hype train”, and folks for whom coding agents don't work, or don't fit their workflow, are considered luddites. There are an incredible number of contradicting claims and predictions out there as well, and I believe what we see is folks projecting their reaction to some amalgamation of them onto others. I see a lot of “they” language, and a lot of viral articles about business leadership “shoving AI down our throats”, and it becomes a divisive issue like the American political scene, with really no one having a real conversation.
zwnow•1h ago
It's all a hype train though. People still believe the "AI is gonna bring utopia" bullshit while the current infra is being built on debt. The only reason it still exists is that all these AI companies believe in some kind of revenue outside of subscriptions. So it's all about:

Owning the infrastructure and enshittifying (ads) once enough products are based on AI.

It's the same chokehold Amazon has on its vendors.

llmslave2•1h ago
I think the reason for the varying claims and predictions is because developers have wildly different standards for what constitutes working code. For the developers with a lower threshold, AI is like crack to them because gen ai's output is similar to what they would produce, and it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown.

Hence why you have in the same thread, some developer who claims that Claude writes 99% of their code and another developer who finds it totally useless. And of course others who are somewhere in the middle.

throw1235435•24m ago
There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was still easier to sometimes do it myself (i.e. a good algo can be concise/more concise than a lossy prompt) and leave the "expansion/repetitive" boilerplate code to the LLM. At least for me the latest models do feel like a "step change" in that the problems can be bigger and/or require less supervision on each problem depending on the tradeoff you want.
simonw•1h ago
We've been giving our work away to each other for free as open source to help improve each other's productivity for 30+ years now and that's only made our profession more valuable.
websiteapi•1h ago
I see little proof that it was open source, rather than the fact that everything is being digitized and the resulting demand for people to help with that, that resulted in higher wages.
simonw•59m ago
I'm not sure how I can prove it, but ~25 years ago building software without open source sucked. You had to build everything from scratch! It took months to get even the most basic things up and running.

I think open source is the single most important productivity boost to our industry that's ever existed. Automated testing is a close second.

Google, Facebook, many others would not have existed without open source to build on.

And those giants and others like them that were enabled by open source employed a TON of people, at competitive rates that greatly increased our salaries.

websiteapi•56m ago
even if that's true it's clear enough AI will reduce the demand for swe
simonw•31m ago
I don't think that's certain. I'm hoping for a Jevons paradox situation where AI drives down the cost of producing software to the point that companies that previously weren't in the market for custom software start hiring software engineers. I think we could see demand go up.
throw1235435•37m ago
Indeed it did; I remember those times. All else being equal, I still think SWE salaries on average would have been higher if we had kept it like that, given basic economics: there would have been a lot fewer people capable of doing it, but the high-ROI automation opportunities would still have been there. The fact that "it sucked" usually creates more scarcity on the supply side, which, all else being equal, means higher wages and, in our capitalist society, status. Other professions that are older, per the parent comment, already know this and don't see SWEs as very "street smart" for disrupting themselves. I've seen articles recently like "at least we aren't in coding" from law, accounting, etc., as an anecdote of this.

With AI, at least locally, I'm seeing the opposite now: less hiring, less wage pressure, and in social circles a lot less status when I mention I'm a SWE (almost sympathy for my lot vs respect only 5 years ago). While I don't care much about the status aspect (although I do care about my ability to earn money), some do.

At least locally, inflation-adjusted SWE wages in my city bought more and were higher in general compared to other professions in the 90s-2000s than onwards (ex big tech). Partly because this difficulty and low-level knowledge meant only very skilled people could participate.

ipdashc•11m ago
> ex big tech

I mean, this seems like a pretty big thing to leave out, no? That's where all the crazy high salaries were!

Also, there are still legacy places that more or less build software like it's 1999. I get the impression that embedded, automotive, and such still rely a lot on proprietary tools, finicky manual processes, low level languages (obviously), etc. But those are notorious for being annoying and not very well paid.

fshacf•52m ago
Then shit suckers like you scraped it all, ignored the licensing, and sold it back to the very people you took from for a premium.
andy99•1h ago
Is the title an ironic play on AI’s trademark writing style, is it AI generated, or is the style just rubbing off on people?
mattnewton•1h ago
I think it’s a popular style before gen ai and the training process of LLMs picked up on that.
andy99•1h ago
That’s not how LLMs work; it’s part of the reinforcement learning or SFT dataset: data labelers would have written or generated tons of examples using this and other patterns (all the emoji READMEs, for example) that the models emulate. The early ones had very formulaic essay-style outputs that always ended with “in conclusion”, lots of the same kind of bullet lists, and a love of adjectives and delving, all of which were intentionally trained in. It’s more subtle now but it’s still there.
mattnewton•59m ago
Maybe I was being imprecise, but I’m not sure what you mean by “not how LLMs work” - discovering patterns of how humans write is exactly the signal they are trained against. Either explicitly curated like SFT or coaxed out during RLHF, no?

It could even have been picked up in pretraining and then rewarded during RLHF when the output domain was being refined; I haven’t used enough LLMs from before post-training to know at what step it usually becomes noticeable.

zwnow•1h ago
Idk, I still mostly avoid using it and if I do, I just copy and paste shit into the Claude web version. I won't ever manage agents as that sounds just as complicated as coding shit myself.
lexandstuff•36m ago
It's not complicated at all. You don't "manage agents". You just type your prompt into a terminal application that can update files, read your docs and run your tests.

As with every new tech there's a hell of a lot of noise (plugins, skills, hooks, MCP, LSP - to quote Karpathy) but most of it can just be disregarded. No one is "behind" - it's all very easy to use.

simonw•1h ago
This is pretty recent - the survey they ran (99 respondents) was August 18 to September 23, 2025, and the field observations (watching developers for 45 minutes, then a 30-minute interview; 13 participants) were August 1 to October 3.

The models were mostly GPT-5 and Claude Sonnet 4. The study was too early to catch the 5.x Codex or Claude 4.5 models (bar one mention of Sonnet 4.5).

This is notable because a lot of academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation.

dheera•1h ago
> academic papers take 6-12 months to come out

It takes about 6 months to figure out how to get LaTeX to position figures where you want them, and then another 6 months to fight with reviewers

joenot443•1h ago
Thanks Simon - always quick on the draw.

Off your intuition, do you think the same study with Codex 5.2 and Opus 4.5 would see even better results?

simonw•1h ago
Depends on the participants. If they're cutting-edge LLM users then yes, I think so. If they continue to use LLMs like they would have back in the first half of 2025 I'm not sure if a difference would be noticeable.
mkozlows•34m ago
I'm not remotely cutting edge (just switched from Cursor to Codex CLI, have no fancy tooling infrastructure, am not even vaguely considering git worktrees as a means of working), but Opus 4.5 and 5.2 Codex are both so clearly more competent than previous models that I've started just telling them to do high-level things rather than trying to break things down and give them subtasks.

If people are really set in their ways, maybe they won't try anything beyond what old models can do, and won't notice a difference, but who's had time to get set in their ways with this stuff?

zkmon•1h ago
I haven't seen a definition of an agent in the paper. Do they differentiate agents from generic online chat interfaces?