
Show HN: Technical Interviews Built for 2025

6•devday-admin•8mo ago
Hey HN,

The way we hire engineers made sense in 2015. But in 2025, when engineers use AI tools daily, we're still testing algorithm memorization on whiteboards.

That's why we're building DevDay. DevDay is built for the new reality of modern engineering work: candidates collaborate with AI teammates, delegate tasks to AI agents, and solve problems using the tools they'd actually use on the job (LLMs, git, and Slack for team communication).

The old interview playbook is fundamentally broken:

- Whiteboard anxiety tests don't predict performance
- Take-home tests and virtual pair programming get gamed with ChatGPT
- Algorithm memorization has zero correlation with debugging prod issues (what you actually deal with in your day-to-day work)

Here is what we are not:

- Another LeetCode clone with AI buzzwords
- Replacing engineers with AI
- "Disrupting" hiring with magic algorithms

What it actually does:

- Tests AI collaboration skills (AI teammates, delegating tasks to agents, coding-assistant integrations)
- Simulates real team environments and workflows
- Shows problem-solving approach, collaboration, and behavioral skills, not memorized solutions
- Assesses how candidates think and communicate

Questions for HN because we are genuinely curious:

- Do you assess engineers who work with AI daily? If yes, how do you do it today?
- What would technical interviews look like if they were designed today within your organisation?
- Are we testing skills that matter in 2025?

Link: trydevday.com

P.S. - Yes, someone will suggest "just pair program" or "check their GitHub." Great for small teams, doesn't scale when hiring 10+ engineers monthly.

Comments

TheMongoose•8mo ago
No, it didn't make sense in 2015 either, for all the reasons you correctly identified. But if someone tries to make me use some AI "collaboration" crap to get a job, I'm just gonna decline and go work in another field.
devday-admin•8mo ago
I totally understand not wanting to jump through new hoops for a job. The goal isn't to make interviewing harder or add more steps, it's to make it more representative of actual work. But if the current process is working for you as a candidate, then this probably isn't solving your problem.

Appreciate the honest take.

whilenot-dev•8mo ago
> doesn't scale when hiring 10+ engineers monthly

This gives off strong "rise and grind guys thinking about life hacks" vibes pretty quickly. Be serious: who's hiring at that rate currently?!

I see whiteboard interviews as an answer to disputes over opinionated tooling. It's just a pen, a blank canvas, and natural language to communicate. There's usually just pseudocode, no IDEs, and zero runtimes.

Bringing "AI teammates" into the mix reintroduces those disputes: candidates would lack the experience to get around your tool and trigger the right responses. Different LLMs have different "characters". As an engineer you'd want to pick the best tool for the job, and no engineer would like to be stuck with your choice. It's usually an effort to figure out and set up such a tool for each distinct project.

Besides that, even technical interviews have a social component to rule out any "cultural" differences within the team. I really doubt that good technical leaders could get enough value out of an AI assessment to skip every in-person step of the interview.

devday-admin•8mo ago
Fair points on several fronts:

You're right about the "10+ monthly" number - that was probably too high for most companies. 3-5 monthly is more realistic for growing startups, but your point stands.

On the tooling disputes - this is actually something we're grappling with. You're absolutely right that engineers want to pick their tools, and being forced into unfamiliar AI interfaces could be worse than a whiteboard. We're experimenting with letting candidates choose their preferred AI assistant (Claude, ChatGPT, etc.) rather than forcing our choice.

The cultural/social component point is spot-on. This isn't meant to replace human interaction entirely - more to supplement the technical assessment part. The in-person cultural evaluation is still crucial.

Curious: do you think there's any way to make technical assessment more realistic without introducing new tool friction?