
A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•2m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•4m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•5m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•8m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•8m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•11m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•15m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•15m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•15m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•18m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•21m ago•1 comment

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•22m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•23m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
3•vinhnx•23m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•28m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•33m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•37m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•38m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•39m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•46m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•49m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•49m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•50m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•51m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•51m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•52m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•52m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•56m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•57m ago•1 comment

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•58m ago•0 comments

Show HN: I replaced my devs with AI agents – and it worked

https://easylab.ai
3•buzzbyjool•9mo ago
I run a small AI company in Luxembourg. We started out as a consulting studio, building custom tools for clients — mostly boring things like dashboards, reporting modules, and CRUD backends.

At some point I realized we were building the same things over and over again. Not in a copy-paste way, but in a “we could generate 80% of this” kind of way. So last year, I ran a live-fire experiment: I asked Claude 3.5 and DeepSeek to build a small admin panel, with tests and API docs, from a plain-language spec.

The result: not great, but usable. It gave us the idea to stop typing code altogether.

Now, at Easylab AI, we don't write code manually anymore. We use a stack of LLM-powered agents (Claude, DeepSeek, GPT-4) with structured task roles:

• an orchestrator agent breaks down the spec
• one agent builds back-end logic
• another generates test coverage
• another checks for security risks
• another synthesizes OpenAPI docs
• and humans only intervene for review & deployment
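
A minimal sketch of what such a pipeline could look like, with a stubbed model call; the role names, the fixed task order, and the `callModel` signature are illustrative assumptions, not Easylab's actual system:

```typescript
// Illustrative sketch only: roles, ordering, and the callModel stub are
// assumptions for the example, not the implementation described in the post.

type Role = "orchestrator" | "backend" | "tests" | "security" | "docs";

interface Artifact {
  role: Role;
  output: string; // generated code, test suite, risk report, or OpenAPI doc
}

// Stand-in for a real LLM API call (Claude, DeepSeek, GPT-4, ...).
async function callModel(role: Role, prompt: string): Promise<string> {
  return `[${role}] response for: ${prompt.slice(0, 60)}...`;
}

async function runPipeline(spec: string): Promise<Artifact[]> {
  // The orchestrator breaks the plain-language spec into a task plan.
  const plan = await callModel("orchestrator", `Break down this spec: ${spec}`);

  // Specialist agents run in sequence, each seeing the plan plus the most
  // recent artifact; humans review before anything is deployed.
  const roles: Role[] = ["backend", "tests", "security", "docs"];
  const artifacts: Artifact[] = [];
  let context = plan;
  for (const role of roles) {
    const output = await callModel(role, context);
    artifacts.push({ role, output });
    context = `${plan}\n\nLatest artifact (${role}):\n${output}`;
  }
  return artifacts;
}

runPipeline("Admin panel with user CRUD and audit log").then(console.log);
```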

Agents talk via a shared context layer we built, and we introduced our own protocol (we call it MCP — Model Context Protocol) to define context flow and fallback behavior.
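
Roughly, "context flow and fallback behavior" in a protocol like this might mean: each agent receives an envelope of inherited context and constraints, and failed calls are retried and then escalated to another model. The envelope fields and retry policy below are guesses for illustration, not the actual protocol:

```typescript
// A rough guess at context flow + fallback; every field and policy value here
// is invented for the example, not taken from the post's protocol.

interface ContextEnvelope {
  projectId: string;
  inherited: string[];   // validated outputs handed down from earlier agents
  constraints: string[]; // invariants every agent must respect
}

interface FallbackPolicy {
  maxRetries: number; // attempts with the primary model before escalating
}

async function withFallback<T>(
  primary: (ctx: ContextEnvelope) => Promise<T>,
  fallback: (ctx: ContextEnvelope) => Promise<T>,
  ctx: ContextEnvelope,
  policy: FallbackPolicy,
): Promise<T> {
  for (let attempt = 1; attempt <= policy.maxRetries; attempt++) {
    try {
      return await primary(ctx); // normal context flow
    } catch {
      // swallow and retry; a real system would log the failure here
    }
  }
  return fallback(ctx); // fallback: hand the same context to another model
}
```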

It’s not perfect. Agents hallucinate. Chaining multiple models can fail in weird ways. Debugging LLM logic isn’t always fun. But…

We’re faster. We ship more. Our team spends more time on logic and less on syntax. And the devs? They’re still here — but they’ve become prompt architects, QA strategists, and AI trainers.

We built Linkeme.ai entirely this way — an AI SaaS for generating social media content for SMEs. It would’ve taken us 3 months before. It took 3 weeks.

Happy to share more details if anyone’s curious. AMA.

Comments

Magma7404•9mo ago
Have you hired a team of security experts to try to crack your web sites and get some validation or certification on the code?
buzzbyjool•9mo ago
Not yet, but it's a good point. It's a real subject!
exceptione•9mo ago
Two thoughts:

- I admire your honesty in telling this. AI is quickly becoming something you'd rather hide.

- I would never want to work at such a company. If I wanted to engineer in human language, I would be a politician or a manager. If I wanted to babysit an automaton, I would have been a factory worker.

buzzbyjool•9mo ago
For your second point, I understand your position, but I strongly believe that it's the future of coding. Coding was a way to translate machine language into something more understandable; AI coding is simply the next step.
fragmede•9mo ago
> If I wanted to babysit an automaton, I would have been a factory worker.

I wonder if horse carriage drivers said the same thing about the advent of cars. Telling the LLM "build me a login page" instead of laboriously looking up example code in docs and retyping Stack Overflow snippets is definitely a different way of working, but thinking that makes someone a politician seems like a bit of a stretch.

exceptione•9mo ago
The difference is determinism. A technically inclined person wants to build things from first principles. Assembly and a higher-level language are, in that respect, the same.

Now, most humans are social beings and would rather play social "games" with language. That is why technical people used to be called nerds: they are the exception. Engineers at heart (those in the field by their own choosing rather than because of economic pressure) love the technical, reasoning part of their brain.

Now a stochastic model that may lie to you, or respond differently depending on how you word it today, is a completely different kind of work. It is in principle not engineering, but rather some kind of managing or influencing.

myk9001•9mo ago
What's your backup plan if you don't mind sharing?
buzzbyjool•9mo ago
What do you mean by backup plan? We produce proper code (Node.js or similar) that is backed up and proceeds through a normal pipeline. Just the production of the code is different.
myk9001•9mo ago
> Just the production of the code is different.

Just like you, I think prompting LLMs to produce code for us is the future of the profession. Not necessarily a fan, this is just how I see the reality of it. The person I'm replying to feels this ruins the profession for them, if I'm reading their comment right. Hence the question.

Edit: Oh, you're the post's author. Thanks for sharing and I hope the business is going strong.

exceptione•9mo ago
I don't understand why you're attracting downvotes (I tried to upvote you), but to answer your question:

I have no real backup plan, but I can see that my (and my peers') knowledge, design sensitivity, and architectural skills will become an even scarcer asset, especially when there is a surge of vibe-coded projects.

In the case of OP, however, I think he has found a niche (I assume) in which, between deep applications and throwaway code, the balance tilts toward the latter. So this is the domain of MS Power Apps, low-code prototypes, and Power BI reports. And so, potentially, his personnel were already more inclined not to mind how the nature of their work changed.

buzzbyjool•9mo ago
You’re absolutely right that there’s a spectrum between deep applications and throwaway code. But I wouldn’t place what we’re doing in the Power Apps / low-code / Power BI category.

The systems we’re building with LLMs (at Easylab AI) aren’t quick prototypes or business dashboards — they’re fully functioning SaaS platforms with scalable backends, custom business logic, API orchestration, test coverage, and long-term maintainability. The difference is: they’re authored through agents, not typed from scratch.

And to your point about design sensitivity and architecture becoming scarce — I couldn’t agree more.

When LLMs handle 80% of the syntactic work, what’s left is the hard stuff: system thinking, naming, sequencing, interfaces, data flows. That’s exactly where our team shifted: less “builders,” more “designers of builders.” It’s not easier work — it’s just a different level of abstraction.

Thanks for the reply, sincerely. It’s good to talk about this without defaulting to hype or fear.

exceptione•9mo ago
I wanted to thank you as well. I replied to you somewhere else in this thread with some more questions, if you feel like answering.
buzzbyjool•9mo ago
I get the discomfort — I felt the same early on. But I think there’s a misunderstanding of what’s actually happening under the hood with modern code-focused LLMs.

We’re no longer in the realm of vague completions. Models like DeepSeek or Claude 3.7 aren’t just stochastic parrots — they operate like abstract interpreters, capable of holding internal representations of logic, system design, even refactoring strategies. And when you constrain them properly — through role separation, test feedback, context anchoring — they become extremely reliable. Not perfect, but engineerable.

What you describe as “managing” or “influencing” is, in our case, more like building structured interpreter stacks. We define agent roles, set execution patterns, log every decision, inject type-checked context. It’s messy, yes, but no more magical than compiling C into assembly. Just at a radically higher level of abstraction.
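
As one concrete example of "test feedback" as a constraint, a generate-validate-retry loop that logs every decision might look like the following; every name here is hypothetical:

```typescript
// Minimal sketch of a generate -> validate -> retry-with-failures cycle,
// assuming the caller supplies the LLM call and a test runner.

interface Decision {
  round: number;
  accepted: boolean;
  failures: string[];
}

const decisionLog: Decision[] = []; // "log every decision"

async function generateUntilGreen(
  generate: (prompt: string) => Promise<string>, // LLM call
  runTests: (code: string) => Promise<string[]>, // returns failure messages
  spec: string,
  maxRounds = 3,
): Promise<string> {
  let prompt = spec;
  for (let round = 0; round < maxRounds; round++) {
    const code = await generate(prompt);
    const failures = await runTests(code);
    decisionLog.push({ round, accepted: failures.length === 0, failures });
    if (failures.length === 0) return code;
    // Context anchoring: feed back the concrete failures, not the transcript.
    prompt = `${spec}\n\nFix these test failures:\n${failures.join("\n")}`;
  }
  throw new Error("agent could not satisfy the test suite; escalate to human review");
}
```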

There’s a quote that captures this well. In March 2024, Jensen Huang (NVIDIA CEO) said:

“English is now the world’s most popular programming language.”

That’s not hyperbole. It reflects a shift in interface — not in intent. LLMs let us program systems using natural abstractions, while still exposing deterministic structure when designed that way.

To me, LLMs are not the death of engineering. They’re the beginning of a new kind. I truly believe the next 10 years will make most traditional programming languages obsolete. We’ll go from prompt → code to prompt → compiled binary, bypassing syntax entirely.

exceptione•9mo ago
Thanks for following up; maybe I can learn something. I wonder what you mean by a "shared context layer"? Do you run everything locally on big rigs, and did you train your own models?

The idea I've got now is that you let general off-the-shelf AI models role-play, and one hands off to the other? But how are you able to give them a shared context layer that is also typed? How is feedback organized in that process?

buzzbyjool•9mo ago
Great questions — and yes, you've got the right intuition: we orchestrate role-specific agents using off-the-shelf LLMs (Claude 3.7, DeepSeek, GPT-4.1, GPT-4 Turbo), and they "hand off" tasks between each other. But to avoid total chaos (or hallucinated collaboration), we had to build a few things around them.

The “shared context layer” is essentially a lightweight memory and coordination layer that persists project state, intermediate decisions, and validated outputs. It’s not a traditional vector store or RAG setup. Instead, we use:

• A Redis-backed scratchpad with typed slots for inputs, constraints, decisions, outputs, and feedback
• An MCP (Model Context Protocol) template that defines what agents should expect, expose, and inherit
• Stateless agents: each one gets a structured payload that includes relevant validated history, filtered to reduce noise
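
A sketch of what such a typed scratchpad might look like over the node-redis client; the slot names follow the list above, while the key layout and JSON encoding are assumptions:

```typescript
// Hypothetical shape of a Redis-backed scratchpad with typed slots; one hash
// per project, one field per slot, each holding a JSON payload.
import { createClient } from "redis";

type Slot = "inputs" | "constraints" | "decisions" | "outputs" | "feedback";

const redis = createClient({ url: "redis://localhost:6379" });

async function writeSlot(projectId: string, slot: Slot, value: unknown) {
  await redis.hSet(`scratchpad:${projectId}`, slot, JSON.stringify(value));
}

async function readSlot<T>(projectId: string, slot: Slot): Promise<T | null> {
  const raw = await redis.hGet(`scratchpad:${projectId}`, slot);
  return raw == null ? null : (JSON.parse(raw) as T);
}

async function demo() {
  await redis.connect();
  await writeSlot("demo-project", "constraints", ["no PII in logs", "Node 20 runtime"]);
  console.log(await readSlot<string[]>("demo-project", "constraints"));
  await redis.quit();
}

demo();
```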

Agents don’t have full access to each other’s output logs (too much context = hallucination risk). Instead, each one produces an “artifact” + optional feedback object. These go into the shared layer, and the orchestrator decides what the next agent should receive and in what form.
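
That handoff could be as simple as a role-aware filter; the dependency map below is an illustrative assumption, not the author's actual orchestrator logic:

```typescript
// Sketch of the artifact + feedback handoff: the orchestrator decides what
// the next agent sees instead of exposing full output logs.

interface AgentArtifact {
  producedBy: string; // e.g. "backend", "tests"
  content: string;    // the validated output itself
  feedback?: string;  // optional notes for downstream agents
}

// Which upstream artifacts each role is allowed to see (assumed routing).
const visibleTo: Record<string, string[]> = {
  tests: ["backend"],
  security: ["backend"],
  docs: ["backend", "tests"],
};

function buildPayload(artifacts: AgentArtifact[], nextRole: string): string {
  const allowed = visibleTo[nextRole] ?? [];
  return artifacts
    .filter((a) => allowed.includes(a.producedBy))
    .map((a) => {
      const note = a.feedback ? `\n(note: ${a.feedback})` : "";
      return `## from ${a.producedBy}\n${a.content}${note}`;
    })
    .join("\n\n");
}
```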

We don’t run anything locally (yet). It’s all API-based for now, with orchestration handled in a containerized layer. That will probably evolve if we scale into more sensitive verticals.

Hope that helps clarify. Happy to dig deeper if you want to build something similar.

exceptione•9mo ago
That clears things up; interesting for sure. Would love to see how this works in practice. If you ever give demos or write blog posts, count me in. :)
Magma7404•9mo ago
Cars are reliable, and when they break down we can fix them ourselves. I can do that too with any compiler or tool. AI is still, and will always be, a black box that could hallucinate at any moment.

You could say the same about software embedded in new cars, but I would reply that those are the kind of cars I wouldn't want to drive. Also, car makers have a legal responsibility to make sure their cars behave well on the road. AI companies have no responsibilities and put a lot of hidden stuff (like censorship) in their products, which makes them unacceptable to me. LLMs are unreliable tools by definition, which is not a good thing compared to the tools I use all the time.

bennydog224•9mo ago
How do your devs feel about this with regard to their careers? Are they worried about their DSA/coding skills atrophying? Not knocking, just genuinely curious.
buzzbyjool•9mo ago
Great question — and one we took seriously early on.

At first, there was some skepticism, and even a bit of anxiety. When we said, “We’re going full AI-assisted development,” the natural reaction was: “What does that mean for my skillset?”

But here’s what happened in practice:

Most of the repetitive tasks — CRUD, glue logic, API boilerplate — disappeared. Instead, devs started focusing on system design, agent orchestration, prompt engineering, constraint writing, testing strategy, and overall architecture.

And they’re thriving.

Nobody’s DSA muscles are atrophying — they’re just being used differently. If anything, they’ve gained new skills that aren’t widely available yet: how to design workflows with stochastic tools, how to debug agent behavior, how to build structured memory into LLM stacks. These are things you won’t find in textbooks yet, but they’re very real problems — and deeply technical.

And let’s be real: you don’t forget how to reverse a linked list just because you stopped manually writing route handlers for user creation.

In short: the devs that leaned into it have grown faster, not slower. And the ones who felt it wasn’t for them — they moved on. Which is fine.

Every shift in tooling brings a kind of Darwinian filtering. It’s not about better or worse, just about who’s willing to adapt to a new abstraction layer. And that’s always been part of how tech evolves.