frontpage.

Show HN: Jwtpeek – minimal, user-friendly JWT inspector in Go

https://github.com/alesr/jwtpeek
1•alesrdev•1m ago•0 comments

Willow – Protocols for an uncertain future [video]

https://fosdem.org/2026/schedule/event/CVGZAV-willow/
1•todsacerdoti•3m ago•0 comments

Feedback on a client-side, privacy-first PDF editor I built

https://pdffreeeditor.com/
1•Maaz-Sohail•7m ago•0 comments

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•13m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
4•quentin101010•19m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•21m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•23m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•23m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•24m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•25m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•26m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
1•gtsnexp•29m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•31m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
2•xipz•33m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•36m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•38m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•40m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT–>ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•41m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•44m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•45m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
2•jingkai_he•48m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

2•swimmingkiim•52m ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•55m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•57m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
4•wjb3•58m ago•2 comments

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•1h ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•1h ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•1h ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•1h ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
4•maheshbhatiya•1h ago•0 comments

Ask HN: How has your company adapted to hiring with LLMs?

3•Python3267•7mo ago
Now that LLMs are starting to get pretty good, how has your company adapted to the new environment? It's no longer good enough to see if someone is good at programming; instead, we need to screen for whether someone is good at engineering. In my experience, software engineering is starting to mature like other forms of engineering. Mechanical engineers don't mill out their own parts (well, they should at least a couple of times, to understand the constraints of machining). SWEs need to judge whether the code is "good" (the way mechanical engineers test their parts) and then design the systems around it. As far as I can see, there are two ways forward:

1. Only do on-sites and eat the travel expenses

2. Test for systems design and culture fit

On-sites allow for a level playing field, where interviewees don't have to compete with the [best person at hiding their LLM use](https://www.reddit.com/r/technology/comments/1j436it/a_student_used_ai_to_beat_amazons_brutal/).

What are people's thoughts?

Comments

codingdave•7mo ago
> It's no longer good enough to see if someone is good at programming; instead, we need to screen for whether someone is good at engineering.

That has been true for many years. That is why we don't just ask FizzBuzz and hire people who can do it. Your ideas about the additional depth that is needed are 100% correct... but they aren't new since LLMs came out. They express the same depth we've been interviewing for all along.
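
As a concrete baseline: FizzBuzz is the classic screening exercise of printing 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python sketch, included only to illustrate how low that bar sits:

    # FizzBuzz: the trivial screen referenced above.
    for n in range(1, 101):
        if n % 15 == 0:      # multiple of both 3 and 5
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)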

Python3267•7mo ago
I guess what I'm getting at is that the FAANG interviews I've done, and the friends of mine who work there, operate with that mindset. You do need to ask questions to solve the problem in those interviews, but they rely heavily on the interviewee's code.

paulcole•7mo ago
Are you in the US and remote?

If so, don't even worry about it.

You'll never outsmart people who want to cut corners and beat the system. In fact, hire the smartest lazy people you can find, let them use LLMs at work, and fire the ones who can't cut it.

Python3267•7mo ago
Agreed, but the problem is that a lot of companies don't ask questions that screen for people who can build longer-term systems that are extensible.
austin-cheney•7mo ago
1. Interview candidates with cameras on.

2. Do not ask basic software-literacy questions. First of all, this was completely stupid even before LLMs. Secondly, it's easy to cheat. If you absolutely have to do it, then do it in terms of measures: most people in software are entirely incapable of measuring anything, and LLMs cannot fix that personality deficiency.

3. Ask questions where the expected answer is not factoid nonsense but a decision they must make. Evaluate their answer on the grounds of risk, coverage, delivery, and performance. For example, if you are interviewing an AI/ML candidate, ask how they overcome bias in their algorithms and how they weigh the consequences of different design outcomes. If they are in QA, ask how they will take ownership of quality analysis for work already in production, or how they will coach developers when communicating steps to reproduce a defect.

4. As an interviewer, you should know by now how to listen to people; that is much more than just parsing their words. If their words say one thing but their body language says something different, they are full of shit. It's okay that they aren't experts in everything; their honesty and humility are far more important. They can get every question wrong, but if they are honest and can make solid decisions, they are at least in the top half of consideration.

5. Finally, after evaluating their decision-making ability and risk analysis, ask them for a story about a time they encountered such a problem and had to learn from failure.

fazlerocks•7mo ago
We've shifted to focusing much more on problem-solving ability during interviews rather than just coding skills.

We still do technical screens, but now we give people access to AI tools during the process, because that's how they'll actually work. We want to see how they break down problems, ask the right questions, and iterate on solutions.

Honestly, the candidates who can use AI effectively to solve complex problems are often better hires than people who can code from scratch but struggle with ambiguous requirements.

The key is testing for engineering thinking, not just programming syntax.

mateo_wendler•7mo ago
I think that if generative AI is soon going to write flawless code for us, we should stop "testing for coding skills" entirely and instead evaluate candidates on reasoning about algorithmic complexity and optimization, scalable system design, security threat modeling, cultural alignment, teamwork aptitude, and leadership potential.

A post-Neuralink world will be harder to assess, though.