frontpage.

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•29s ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•3m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•3m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•8m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
2•throwaw12•9m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•9m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•10m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•12m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•16m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•18m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•24m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•26m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•31m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•33m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•33m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•36m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•37m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•39m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•40m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•43m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•44m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•47m ago•1 comments

UK infants ill after drinking contaminated baby formula from Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•48m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•48m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•50m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•53m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•58m ago•1 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•59m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•1h ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•1h ago•0 comments

Show HN: I replaced my devs with AI agents – and it worked

https://easylab.ai
3•buzzbyjool•9mo ago
I run a small AI company in Luxembourg. We started out as a consulting studio, building custom tools for clients — mostly boring things like dashboards, reporting modules, and CRUD backends.

At some point I realized we were building the same things over and over again. Not in a copy-paste way, but in a “we could generate 80% of this” kind of way. So last year, I ran a live-fire experiment: I asked Claude 3.5 and DeepSeek to build a small admin panel, with tests and API docs, from a plain-language spec.

The result: not great, but usable. It gave us the idea to stop typing code altogether.

Now, at Easylab AI, we don’t write code manually anymore. We use a stack of LLM-powered agents (Claude, DeepSeek, GPT-4) with structured task roles:

• an orchestrator agent breaks down the spec
• one agent builds back-end logic
• another generates test coverage
• another checks for security risks
• another synthesizes OpenAPI docs
• and humans only intervene for review & deployment
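To make that concrete, here's a minimal TypeScript sketch of how a pipeline like this can be wired. It's illustrative only: the names (Role, Task, runPipeline) are made up for this post, not our production code, and the real orchestrator does far more validation.

```typescript
// Illustrative sketch of a role-based agent pipeline (simplified; not production code).
type Role = "orchestrator" | "backend" | "tests" | "security" | "docs";

interface Task {
  role: Role;   // which agent should handle this
  spec: string; // plain-language description of what to produce
}

interface AgentResult {
  role: Role;
  output: string; // generated code, tests, report, or docs
}

// Each role is one LLM call behind a common interface.
type Agent = (task: Task) => Promise<AgentResult>;

async function runPipeline(
  spec: string,
  agents: Record<Role, Agent>
): Promise<AgentResult[]> {
  // The orchestrator turns the spec into a JSON list of sub-tasks.
  const plan = await agents.orchestrator({ role: "orchestrator", spec });
  const subTasks: Task[] = JSON.parse(plan.output);

  const results: AgentResult[] = [];
  for (const task of subTasks) {
    // Sequential hand-off: each agent sees only its own task, not the full history.
    results.push(await agents[task.role](task));
  }
  return results; // humans review everything before deployment
}
```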

Agents talk via a shared context layer we built, and we introduced our own protocol (we call it MCP — Model Context Protocol) to define context flow and fallback behavior.
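To give a flavor of what the protocol pins down, a single context-flow rule could look roughly like this (field names are illustrative, not the actual MCP template):

```typescript
// Rough shape of one MCP context-flow rule (illustrative field names).
interface ContextRule {
  producer: string;    // which agent writes this piece of context
  consumers: string[]; // which agents are allowed to read it
  maxTokens: number;   // budget, to keep downstream prompts small
  fallback: "retry" | "escalate-to-human" | "skip"; // behavior when validation fails
}

const template: ContextRule[] = [
  { producer: "orchestrator", consumers: ["backend", "tests"], maxTokens: 2000, fallback: "escalate-to-human" },
  { producer: "backend", consumers: ["tests", "security", "docs"], maxTokens: 4000, fallback: "retry" },
];
```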

It’s not perfect. Agents hallucinate. Chaining multiple models can fail in weird ways. Debugging LLM logic isn’t always fun. But…

We’re faster. We ship more. Our team spends more time on logic and less on syntax. And the devs? They’re still here — but they’ve become prompt architects, QA strategists, and AI trainers.

We built Linkeme.ai entirely this way — an AI SaaS for generating social media content for SMEs. It would’ve taken us 3 months before. It took 3 weeks.

Happy to share more details if anyone’s curious. AMA.

Comments

Magma7404•9mo ago
Have you hired a team of security experts to try to crack your websites and get some validation or certification of the code?
buzzbyjool•9mo ago
Not yet, but it's a good point. It's a subject we need to address!
exceptione•9mo ago
Two thoughts:

- I admire your honesty in telling this. AI is quickly becoming something you'd rather hide.

- I would never want to work at such a company. If I wanted to engineer by human language I would be a politician or a manager. If I wanted to babysit an automaton, I would have been a factory worker.

buzzbyjool•9mo ago
For your second point, I understand your position, but I strongly believe it's the future of coding. Coding was a way to translate machine language into something more understandable; AI coding is simply the next step.
fragmede•9mo ago
> If I wanted to babysit an automaton, I would have been a factory worker.

I wonder if horse carriage drivers said the same thing about the advent of cars. Telling the LLM "build me a login page" instead of laboriously looking up example code in docs and retyping Stack Overflow snippets is definitely a different way of working, but thinking that makes someone a politician seems like a bit of a stretch.

exceptione•9mo ago
The difference is determinism. A technically inclined person wants to build things from first principles. Assembly and a higher-level language are, in that respect, the same.

Now, most humans are social beings and would rather play social "games" with language. That is why technical people used to be called nerds: they are the exception. Engineers at heart (those in the field by their own choosing rather than because of economic pressure) love the technical-reasoning part of their brain.

Now, a stochastic model that may lie to you, or respond differently depending on how you word things today, makes for a completely different kind of work. It is in principle not engineering, but rather some kind of managing or influencing.

myk9001•9mo ago
What's your backup plan if you don't mind sharing?
buzzbyjool•9mo ago
What do you mean by backup plan? We produce proper code (Node.js or similar) that is backed up and proceeds through a normal pipeline. Only the production of the code is different.
myk9001•9mo ago
> Just the production of the code is different.

Just like you, I think prompting LLMs to produce code for us is the future of the profession. Not necessarily a fan; this is just how I see the reality of it. The person I'm replying to feels this ruins the profession for them, if I'm reading their comment right. Hence the question.

Edit: Oh, you're the post's author. Thanks for sharing and I hope the business is going strong.

exceptione•9mo ago
I don't understand why you're attracting downvotes; I tried to upvote you. But to answer your question:

I have no real backup plan, but I can see that my (and my peers') knowledge, design sensitivity, and architectural skills will become an even scarcer asset, especially when there is a surge of vibe-coded projects.

In the case of OP, however, I think he has found a niche (I assume) in which, between deep applications and throwaway code, the balance tilts toward the latter. So this is the domain of MS Power Apps, low-code prototypes, and Power BI reports. And so, potentially, his personnel were already less inclined to dislike how the nature of their work changed.

buzzbyjool•9mo ago
You’re absolutely right that there’s a spectrum between deep applications and throwaway code. But I wouldn’t place what we’re doing in the Power Apps / low-code / Power BI category.

The systems we’re building with LLMs (at Easylab AI) aren’t quick prototypes or business dashboards — they’re fully functioning SaaS platforms with scalable backends, custom business logic, API orchestration, test coverage, and long-term maintainability. The difference is: they’re authored through agents, not typed from scratch.

And to your point about design sensitivity and architecture becoming scarce — I couldn’t agree more.

When LLMs handle 80% of the syntactic work, what’s left is the hard stuff: system thinking, naming, sequencing, interfaces, data flows. That’s exactly where our team shifted: less “builders,” more “designers of builders.” It’s not easier work — it’s just a different level of abstraction.

Thanks for the reply, sincerely. It’s good to talk about this without defaulting to hype or fear.

exceptione•9mo ago
I wanted to thank you as well. I replied to you elsewhere in this thread with some more questions, if you feel like answering.
buzzbyjool•9mo ago
I get the discomfort — I felt the same early on. But I think there’s a misunderstanding of what’s actually happening under the hood with modern code-focused LLMs.

We’re no longer in the realm of vague completions. Models like DeepSeek or Claude 3.7 aren’t just stochastic parrots — they operate like abstract interpreters, capable of holding internal representations of logic, system design, even refactoring strategies. And when you constrain them properly — through role separation, test feedback, context anchoring — they become extremely reliable. Not perfect, but engineerable.
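The simplest concrete example of "engineerable" is the test-feedback loop: regenerate until the candidate passes its tests or the retry budget runs out. A toy sketch, where generate and runTests are stand-ins for an LLM call and a sandboxed test runner, not our real stack:

```typescript
// Toy constraint loop: feed test failures back into the next generation attempt.
async function generateUntilGreen(
  spec: string,
  generate: (spec: string, feedback?: string) => Promise<string>,
  runTests: (code: string) => Promise<{ passed: boolean; log: string }>,
  maxAttempts = 3
): Promise<string> {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = await generate(spec, feedback);
    const result = await runTests(code);
    if (result.passed) return code; // constraint held: hand off to human review
    feedback = result.log;          // the failure log anchors the next attempt
  }
  throw new Error(`no passing candidate after ${maxAttempts} attempts`);
}
```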

What you describe as “managing” or “influencing” is, in our case, more like building structured interpreter stacks. We define agent roles, set execution patterns, log every decision, inject type-checked context. It’s messy, yes, but no more magical than compiling C into assembly. Just at a radically higher level of abstraction.

There’s a quote that captures this well. In March 2024, Jensen Huang (NVIDIA CEO) said:

“English is now the world’s most popular programming language.”

That’s not hyperbole. It reflects a shift in interface — not in intent. LLMs let us program systems using natural abstractions, while still exposing deterministic structure when designed that way.

To me, LLMs are not the death of engineering. They’re the beginning of a new kind. I truly believe the next 10 years will make most traditional programming languages obsolete. We’ll go from prompt → code to prompt → compiled binary, bypassing syntax entirely.

exceptione•9mo ago
Thanks for following up, maybe I can learn something. I wonder what you mean by a "shared context layer"? Do you run everything locally on big rigs, and did you train your own models?

The idea I've got now is that you let general off-the-shelf AI models role-play, and one hands off to the other? But how would you let those use a shared context layer that is also typed? How is feedback organized in that process?

buzzbyjool•9mo ago
Great questions — and yes, you've got the right intuition: we orchestrate role-specific agents using off-the-shelf LLMs (Claude 3.7, DeepSeek, GPT-4.1, GPT-4 Turbo), and they "hand off" tasks between each other. But to avoid total chaos (or hallucinated collaboration), we had to build a few things around them.

The “shared context layer” is essentially a lightweight memory and coordination layer that persists project state, intermediate decisions, and validated outputs. It’s not a traditional vector store or RAG setup. Instead, we use:

• A Redis-backed scratchpad with typed slots for inputs, constraints, decisions, outputs, and feedback
• An MCP (Model Context Protocol) template that defines what agents should expect, expose, and inherit
• Each agent works statelessly, but gets a structured payload that includes relevant validated history, filtered to reduce noise
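Stripped way down, the scratchpad idea looks like this (assuming the ioredis client; the slot names match the list above, but the schema is illustrative, not ours):

```typescript
// Minimal sketch of a Redis-backed scratchpad with typed slots (illustrative schema).
import Redis from "ioredis";

type Slot = "inputs" | "constraints" | "decisions" | "outputs" | "feedback";

class Scratchpad {
  constructor(private redis: Redis, private projectId: string) {}

  private key(slot: Slot): string {
    return `project:${this.projectId}:${slot}`;
  }

  // Append one validated entry to a slot (entries are JSON strings in a Redis list).
  async write(slot: Slot, entry: unknown): Promise<void> {
    await this.redis.rpush(this.key(slot), JSON.stringify(entry));
  }

  // Read back only the most recent entries, to keep agent payloads small.
  async read(slot: Slot, limit = 10): Promise<unknown[]> {
    const raw = await this.redis.lrange(this.key(slot), -limit, -1);
    return raw.map((r) => JSON.parse(r));
  }
}
```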

Agents don’t have full access to each other’s output logs (too much context = hallucination risk). Instead, each one produces an “artifact” + optional feedback object. These go into the shared layer, and the orchestrator decides what the next agent should receive and in what form.
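In code terms, the hand-off objects are plain structured data, roughly like this (field names illustrative):

```typescript
// Rough shape of the artifact + feedback hand-off (illustrative fields).
interface Artifact {
  agent: string;                              // which role produced it
  kind: "code" | "tests" | "report" | "docs";
  content: string;
  validated: boolean;                         // set after orchestrator checks pass
}

interface Feedback {
  forAgent: string; // which role should see this note
  notes: string;    // e.g. a flagged security concern
}

// The orchestrator builds the next agent's payload: only validated artifacts,
// only the feedback addressed to that role, never the full output log.
function buildPayload(role: string, artifacts: Artifact[], feedback: Feedback[]) {
  return {
    artifacts: artifacts.filter((a) => a.validated),
    feedback: feedback.filter((f) => f.forAgent === role),
  };
}
```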

We don’t run anything locally (yet). It’s all API-based for now, with orchestration handled in a containerized layer. That will probably evolve if we scale into more sensitive verticals.

Hope that helps clarify. Happy to dig deeper if you want to build something similar.

exceptione•9mo ago
That clears things up, interesting for sure. Would love to see how this works in practice. If you ever give demos or write blog posts, count me in. :)
Magma7404•9mo ago
Cars are reliable, and when they break down we can fix them ourselves. I can do that too with any compiler or tool. AI is still, and will always be, a black box that could hallucinate at any moment.

You could say the same about the software embedded in new cars, but I would reply that those are exactly the kind of cars I wouldn't want to drive. Also, car makers have a legal responsibility to make sure the car behaves well on the road. AI companies have no such responsibilities, and they put a lot of hidden stuff (like censorship) in their products, which makes them unacceptable to me. LLMs are unreliable tools by definition, which is not something I can say about the tools I use all the time.

bennydog224•9mo ago
How do your devs feel about this with regard to their careers? Are they worried about their DSA/coding skills atrophying? Not knocking, just genuinely curious.
buzzbyjool•9mo ago
Great question — and one we took seriously early on.

At first, there was some skepticism, and even a bit of anxiety. When we said, “We’re going full AI-assisted development,” the natural reaction was: “What does that mean for my skillset?”

But here’s what happened in practice:

Most of the repetitive tasks — CRUD, glue logic, API boilerplate — disappeared. Instead, devs started focusing on system design, agent orchestration, prompt engineering, constraint writing, testing strategy, and overall architecture.

And they’re thriving.

Nobody’s DSA muscles are atrophying — they’re just being used differently. If anything, they’ve gained new skills that aren’t widely available yet: how to design workflows with stochastic tools, how to debug agent behavior, how to build structured memory into LLM stacks. These are things you won’t find in textbooks yet, but they’re very real problems — and deeply technical.

And let’s be real: you don’t forget how to reverse a linked list just because you stopped manually writing route handlers for user creation.

In short: the devs that leaned into it have grown faster, not slower. And the ones who felt it wasn’t for them — they moved on. Which is fine.

Every shift in tooling brings a kind of Darwinian filtering. It’s not about better or worse, just about who’s willing to adapt to a new abstraction layer. And that’s always been part of how tech evolves.