
When Science Goes Agentic

https://cacm.acm.org/blogcacm/when-science-goes-agentic/
1•tchalla•2m ago•0 comments

Java 26 is here, and with it a solid foundation for the future

https://hanno.codes/2026/03/17/java-26-is-here/
2•mfiguiere•2m ago•0 comments

The Los Angeles Aqueduct Is Wild

https://practical.engineering/blog/2026/3/17/the-los-angeles-aqueduct-is-wild
1•michaefe•2m ago•0 comments

Consent.txt – compile one AI policy into robots.txt, AIPREF, and headers

https://github.com/GGeronik/consent-txt
1•geronik•5m ago•2 comments

Women are being abandoned by their partners on hiking trails

https://www.theguardian.com/lifeandstyle/ng-interactive/2026/mar/17/alpine-divorce-abandoned-hiki...
2•asib•5m ago•0 comments

Show HN: Chrome extension that hijacks any site's own API to modify it

https://github.com/hvardhan878/quark-browser-agent
1•hvardhan878•7m ago•0 comments

Reducing quarantine delay 83% using Genetic Algorithms for playbook optimization

https://www.securesql.info/2025/04/06/playbook-management/
1•projectnexus•7m ago•1 comments

Node.js blocks PR from dev because he used Claude Code to create it

https://github.com/nodejs/node/pull/61478
2•gregdoesit•8m ago•0 comments

Python 3.15's JIT is now back on track

https://fidget-spinner.github.io/posts/jit-on-track.html
2•guidoiaquinti•8m ago•0 comments

Remote Control for Agents

https://www.restate.dev/blog/a-remote-control-for-your-agents
1•gk1•9m ago•0 comments

Danger Coffee: Mold-Free Remineralized Coffee Replaces What Regular Coffee Takes

https://dangercoffee.com/
1•amyjo•9m ago•1 comments

Building a dry-run mode for the OpenTelemetry collector

https://ubuntu.com/blog/building-a-dry-run-mode-for-the-opentelemetry-collector
1•simskij•9m ago•0 comments

LotusNotes

https://computer.rip/2026-03-14-lotusnotes.html
1•laacz•10m ago•0 comments

Austin draws another billionaire as Uber co-founder joins California exodus

https://www.statesman.com/business/article/uber-founder-austin-tech-move-robots-22079819.php
1•dmitrygr•10m ago•0 comments

Deep Data Insights for Polymarket Traders

https://www.holypoly.io
1•alexanderstahl•10m ago•1 comments

Show HN: A simple dream to fit in every traveler's pocket

https://www.callzo.io/blog/we-built-callzo-with-dream-of-being-in-every-travellers-pocket
1•mayursinh•10m ago•0 comments

Rockstar Games stopped selling its digital games directly to players in Brazil

https://support.rockstargames.com/articles/1RrKywdOgzDjAMFbL6ZhbK/latest-information-on-the-digit...
1•throwaway2027•11m ago•0 comments

The US-Israeli strategy against Iran is working. Here is why

https://www.aljazeera.com/opinions/2026/3/16/the-us-israeli-strategy-against-iran-is-working-here...
1•mhb•14m ago•0 comments

John Carmack on corporate advisory boards

https://twitter.com/ID_AA_Carmack/status/2033973070801895832
2•tosh•14m ago•0 comments

Microsoft Announces Copilot Leadership Update

https://blogs.microsoft.com/blog/2026/03/17/announcing-copilot-leadership-update/
1•toomuchtodo•14m ago•0 comments

Designing an AI Gateway and Durable Workflow System

https://stevekinney.com/writing/ai-gateway-durable-workflows
1•stevekinney•15m ago•0 comments

A text-only social platform, with custom algorithm for users

https://contextsocial-0f2d73b46fe0.herokuapp.com/login?callbackUrl=https%3A%2F%2Flocalhost%3A7764%2F
2•icyou780•16m ago•0 comments

Show HN: Automatic Fileless Malware Detection via eBPF Probes and LLMs

https://github.com/Raulgooo/godshell
1•raulgooo•17m ago•0 comments

Kagi's Orion browser hits public beta on Linux

https://www.omgubuntu.co.uk/2026/03/orion-for-linux-beta-release
1•mitchbob•18m ago•0 comments

A Big Pharma Company Stalled a Potentially Lifesaving Vaccine

https://www.propublica.org/article/how-big-pharma-company-stalled-tuberculosis-vaccine-to-pursue-...
2•marvinborner•18m ago•0 comments

Nvidia Just Made the Claw Enterprise-Ready

https://nervegna.substack.com/p/nvidia-just-made-the-claw-enterprise
1•tacon•18m ago•0 comments

Notes from a Law Professor with No Idea What's Going On

https://leahey.org/blog/2026/03/17/notes-from-a-law-professor.html
2•tldrthelaw•22m ago•0 comments

Benchmarking Distilled Language Models for Performance and Efficiency

https://arxiv.org/abs/2602.20164
2•PaulHoule•22m ago•0 comments

Show HN: A complete, containerized data engineering learning platform

https://github.com/MarlonRibunal/learning-data-engineering
1•MarlonPro•23m ago•1 comments

Search Quality Assurance with AI as a Judge

https://engineering.zalando.com/posts/2026/03/search-quality-assurance-with-llm-judge.html
1•hrmtst93837•23m ago•0 comments

Ask HN: How are you doing technical interviews in the age of Claude/ChatGPT?

5•jonjou•2h ago
I’m a founder/dev trying to figure out a better way to do technical interviews, because the current state is a nightmare.

Right now, every standard take-home or HackerRank/LeetCode test is easily solved by LLMs. As a result, companies are accidentally hiring what we call "vibe coders": candidates who are phenomenal at prompting AI to generate boilerplate, but who completely freeze when the architecture gets complex, when things break, or when the AI subtly hallucinates.

We are working on a new approach and I want to validate the engineering logic with the people who actually conduct these interviews.

Instead of trying to ban AI (which is a losing battle), we want to test for "AI Steering".

The idea:

1. Drop the candidate into a real, somewhat messy sandbox codebase.

2. Let them use whatever AI they want.

3. Inject a subtle architectural shift, a breaking dependency, or an AI hallucination.

4. Measure purely through telemetry (Git diffs, CI/CD runs, debugging paths) how they recover and fix the chaos.

Basically: Stop testing syntax, start testing architecture and debugging skills in the age of AI.
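As a toy sketch of the telemetry side, recovery could be scored from a stream of timestamped events (commits, CI runs) after the fault is injected. Everything here is hypothetical: the event kinds, the metric names, and the scoring itself are illustrative, not a spec for the actual backend:

```python
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    minute: int          # minutes since the fault was injected
    kind: str            # e.g. "commit" or "ci_run" (hypothetical kinds)
    passed: bool = False  # only meaningful for "ci_run" events

def score_recovery(events):
    """Toy metric: did the candidate reach a green CI run, how long it
    took, and how many commits they spent getting there."""
    commits = [e for e in events if e.kind == "commit"]
    green = [e for e in events if e.kind == "ci_run" and e.passed]
    if not green:
        return {"recovered": False, "minutes_to_fix": None,
                "commits": len(commits)}
    fix_at = min(e.minute for e in green)  # first green CI after injection
    return {"recovered": True,
            "minutes_to_fix": fix_at,
            "commits": len([c for c in commits if c.minute <= fix_at])}
```

A real version would derive these events from Git and CI webhooks rather than hand-built objects, but even this toy form shows the kind of signal (time-to-detection, thrash before the fix) the telemetry approach is after.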

Before we spend months building out the backend for this simulation, I need a reality check from experienced leads:

1. Does testing a candidate's ability to "steer" and debug AI-generated code make more sense to you than traditional algorithms?

2. How are you currently preventing these "prompt-only" developers from slipping through your own interview loops?

(Not linking anything here because there's nothing to sell yet, just looking for brutal feedback on the methodology.)

Comments

dakiol•1h ago
> 1. Does testing a candidate's ability to "steer" and debug AI-generated code make more sense to you than traditional algorithms?

Testing the candidate's ability to "steer" agents seems to me like testing their ability to know the Java API or to recite SOLID by heart.

> 2. How are you currently preventing these "prompt-only" developers from slipping through your own interview loops?

We don't ask LeetCode anymore. We keep the usual systems design interview, in which AI isn't needed (or at least we don't allow it, because in this kind of interview we're more interested in seeing how the candidate thinks).

We have a new stage in our job interview, though: generic Q/A about the fundamentals of software engineering and computer science. Again, we don't care anymore how candidates produce code. We care about what they know and what they don't know: the scope of their knowledge, and when they need to rely on AI to come up with an answer. Silly (non-real) example: "Can you write a program that detects whether another program halts?" The people we want are the ones who would say something about the Halting Problem, but also be practical and ask more questions about the program's requirements.
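For concreteness, the reason no such program can exist is the classic diagonal argument, sketched below (the `halts` oracle is hypothetical by construction; this is an illustration, not runnable machinery):

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) halts.
    The diagonal argument below shows it cannot exist in general."""
    raise NotImplementedError("no general halting oracle exists")

def diagonal(program):
    # If `halts` existed, this function would contradict it:
    if halts(program, program):
        while True:   # loop forever when the oracle says "halts"
            pass
    # ...and return (halt) when the oracle says "loops forever".

# diagonal(diagonal) would halt iff it doesn't halt -- a contradiction,
# so `halts` cannot be implemented for arbitrary programs.
```

A candidate who can walk through this contradiction, and then pivot to "but what restricted cases of the program *can* we check?", is showing exactly the breadth being tested for.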

You get the point: we look for people with a good breadth of knowledge, who can communicate well and know their shit. Whether they can use tool x or y (including LLMs) comes as a given for such people.

jonjou•46m ago
This is a fantastic perspective, thank you. You hit the nail on the head: the ultimate goal is testing fundamental engineering breadth and systems thinking, not tool usage.

I should definitely clarify my use of the word "steering": I completely agree that testing prompt engineering is just the new API memorization, which is useless.

By steering, I mean putting them in a situation where the AI generates a plausible but architecturally flawed solution, and seeing if they have the fundamental knowledge to spot the BS, understand the scope of the problem, and fix it.

Basically, an automated way to test the exact critical thinking you mentioned.

I love your approach of dropping LeetCode for fundamentals Q/A and Systems Design. But out of curiosity, how do you scale that at the top of the funnel? Doing deep, manual 1-on-1 assessments gives the best signal by far, but doesn't that burn a massive amount of your senior engineers' time?