frontpage.

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•1m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
1•cui•4m ago•1 comment

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•5m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
2•EA-3167•6m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
4•fliellerjulian•8m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•10m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•10m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
1•RickJWagner•12m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•12m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
4•jbegley•13m ago•0 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•14m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•14m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
3•amitprasad•14m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•16m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•17m ago•0 comments

Busy for the Next Fifty to Sixty Bud

https://pestlemortar.substack.com/p/busy-for-the-next-fifty-to-sixty-had-all-my-money-in-bitcoin-...
1•mithradiumn•17m ago•0 comments

Imperative

https://pestlemortar.substack.com/p/imperative
1•mithradiumn•18m ago•0 comments

Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•22m ago•1 comment

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
3•timpera•23m ago•1 comment

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•25m ago•1 comment

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
2•jandrewrogers•25m ago•1 comment

Peacock. A New Programming Language

2•hashhooshy•30m ago•1 comment

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
3•bookofjoe•31m ago•1 comment

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•35m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•36m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•36m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•37m ago•1 comment

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•38m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•39m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
5•sleazylice•39m ago•2 comments

We’re more patient with AI than with each other

https://www.uxtopian.com/journal/were-more-patient-with-ai-than-one-another
21•lucidplot•3w ago

Comments

CharlieDigital•3w ago
The AI doesn't judge, it doesn't have ego, and generally, if it does poorly, it's more a reflection of the user providing the inputs (giving bad instructions or not enough context).

So in a sense, we are forgiving of ourselves more than anything.

Grimblewald•3w ago
Eh, sometimes the instructions you need to give are almost the code you need itself, at which point it's better to just write the code rather than have it fuck up your logic for you.

In fact, in my domain, that's almost always the case. LLMs rarely get it right. Getting something done that would take me a day takes a day with an LLM, only now I don't fully understand what was written, so there's no real value added, just loss.

It sure can be nice for solved problems and boilerplate, though.

xboxnolifes•3w ago
> if it does poorly, it's more a reflection of the user providing the inputs (giving bad instructions or not enough context).

Sounds a lot like the understanding we should have with each other.

spwa4•2w ago
Humans are social animals. Any interaction with anyone else (except perhaps kids, and even then) is a competition, or at least, is at risk of turning into a competition at the drop of a hat. And humans just love competing with each other over anything at all, like all social animals do.
lovich•3w ago
I don't find the conclusions plausible. It completely ignores that AI is a machine and not in our social hierarchy, while humans are, and we have a large section of wetware devoted to constantly judging the social hierarchy and its rules.

At least personally, this was obvious to me years before AI was around. Whenever we had clear data that came to an obvious conclusion, I found that it mattered if _I_ said the conclusion, regardless of whether the data was included. I got a lot more leeway by simply presenting the data to represent my conclusion and letting my boss come to it.

In the first situation the conclusion was now _my_ opinion and everyone's feelings got involved. In the second, the magic conch (usually a spreadsheet) said the opinion, so no feelings were triggered.

Kwpolska•3w ago
> No frustration. No judgment. Just iteration.

[citation needed]

This entire article is just meaningless vibes of one guy who sells AI stuff.

bitwize•3w ago
Also, what are the "rule of three" and constructions of the form "no X, no Y, just Z" indicative of?

Bruh either had help, or he's the most trite writer ever.

lovich•3w ago
We'll also get to the point eventually where people are writing like AI because they're exposed to it so much. I've caught myself rephrasing certain posts after realizing they sounded like AI.
funnyenough•3w ago
I am more patient with kids, dogs, etc.
dfajgljsldkjag•3w ago
It is funny how we are so willing to iterate on a prompt for ten minutes but we get annoyed when we have to repeat ourselves to a person. I think we could all benefit from not taking things so personally at work.
drooby•3w ago
While I wholeheartedly agree with your conclusion...

It's worth noting that much of the frustration stems from expectations.

I don't expect an AI to learn and "update its weights"...

I do, however, expect colleagues to learn at a specific rate. A rate that I believe should meet or exceed my company's standards for, uh, human intelligence.

edgarvaldes•3w ago
With a program or machine, I can cut the interaction at any time, walk away and not feel rude.
perrygeo•3w ago
Speaking only of written communication here: I've noticed a distinct trend of people no longer writing documentation, comments, release notes, etc. intended for human consumption, and instead devoting their writing efforts to skills, prompts, and CLAUDE.md files intended for machines.

While my initial reaction was dystopian horror that we're losing our humanity, I feel slightly different after sitting with it for a while.

Ask yourself: how effective was all that effort, really? Did any humans actually read and internalize what was written? Or did it just rot in the company wiki? Were we actually communicating effectively with our peers, or just spending lots of time trying to? Let's not retcon our way into believing the pre-AI days were golden. So much tribal knowledge has been lost, NOT because no one documented it but because no one bothered to read it. Now at least the AI reads it.