
Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•53s ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•3m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
3•codexon•3m ago•1 comment

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•4m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•7m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•8m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•9m ago•0 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•9m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•9m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•12m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•12m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•14m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•16m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•17m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•18m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•18m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•19m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•20m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•23m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•27m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•28m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•29m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•32m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•34m ago•1 comment

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•35m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•42m ago•0 comments

The Big Hunger by Walter M. Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
2•shervinafshar•43m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•48m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
10•mooreds•49m ago•4 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•50m ago•0 comments

Ask HN: Typical tech job interview in late 2025?

2•dakiol•2mo ago
A year ago or so, I went through the "classic" tech interview. Not FAANG, but not an unknown company either, so something one or two levels below FAANG. Good pay, lots of senior+ engineers, and a tough environment where you can’t really slack off (they call it "challenging"). The process was the usual:

* Intro call with a recruiter to get to know you and all that crap

* Live coding or a take-home assignment (plus a follow-up to explain the code). No AI or Googling allowed

* System design interview. Again, no AI or Googling

* Interview with an engineering manager. Behavioral interview questions, same rule: no AI

* Team/culture fit

Now I’m wondering how interviews look today. I use LLMs for about 50–70% of my projects (mostly greenfield ones), and they’ve become just another tool in my workflow, like CI/CD, testing, Datadog, or debuggers. So I’m not sure whether I should prepare for interviews like before, or start integrating LLMs into my prep.

It feels odd to imagine a live coding interview with an LLM, tbh, where I’d have to pretend to think through the problem while basically guiding the model toward the solution. In practice, my process is more trial and error, almost brute force, but it works nicely, kind of like sculpting stone. I don't think anyone would judge you too harshly for the way you use debuggers, as long as you get the job done... I have the same feeling about how one uses LLMs, but since they are relatively new, I guess one needs to fake how the usage actually goes (so that one passes the interview).

Thoughts?

Comments

luponius•2mo ago
Had interviews last year insisting on the use of LLMs, and others tolerating it. Our head wants to introduce Codex into our workflows now, so if you're pretending you're not using them, or joining a team that swears off them, you'd better have a very good reason, I suppose?
dakiol•2mo ago
Yeah, exactly. I'm using Codex, btw. So it feels weird to pretend I'm not using LLMs and that I write all my code just with my brain. But on the other hand, there's not much point in explaining how you use LLMs to do a task... like, it would look pretty ridiculous to share my screen and ask the LLM for 90% of the solution while the interviewer just watches the LLM output... that's like analyzing how one uses Google to search for stuff (and I swear that 100% of the engineers out there use Google to search for coding-related stuff, but I haven't heard of any tech interview that includes a session to assess your Google skills, right?)

So, if we're not pretending, and companies want people who can use LLMs, well, I think it's rather clear: no more live coding interviews, no more live system design interviews. You can just send take-home assignments, because people WILL use LLMs to solve them. Then you analyze the solutions offline and pick the best.

If anything, the only "live" interview needed is: are-you-a-real-person-and-not-an-asshole?

rekabis•2mo ago
Every corporation I have interfaced with over the last few months has demonstrated massively epic levels of FOMO over AI.

And yet, when I ask them how they are tracking AI’s effectiveness, especially with regard to degrading skill sets, lowered creativity and effectiveness on complex or edge-case problems, slowed dev velocity, growing levels of needless code complexity (with the associated ballooning of LoC), and gratuitous hallucinations breeding bugs like mealworm farms, all I get are crickets. Or worse, deer-in-the-headlights looks. They’re all wildly unaware of the downsides that are slowly being confirmed by the research.

Frankly, I feel that I am lucky that I’ve chosen a sabbatical to deal with my parent’s EoL issues. The chance that this will extend into the popping of the AI bubble appears to be non-trivial. By the time I start looking in earnest again, AI might not be a critical employment benchmark anymore.

Or one of my projects will become profitable and I won’t have to deal with all that bullshite.

akshaykokane•2mo ago
I have been seeing the same thing. Teams are confused right now because interviews still measure 2015-era skills, while day-to-day work requires 2025-era AI collaboration skills. Most companies either ignore LLM usage completely or try to forbid it in interviews, even though developers will use it 50%+ of the time on the job.

One interesting direction I’ve been exploring is evaluating candidates on how they think with AI, not whether they avoid it. Things like debugging AI-generated code, verifying assumptions, identifying hallucinations, choosing when to trust the model, etc. These are the actual bottlenecks today, not LeetCode puzzles.

We built an internal tool that looks at this “cognitive intelligence” part instead of raw coding speed, and the signals have been much more predictive than traditional interviews. I feel like more companies will eventually move toward this kind of evaluation because banning AI in interviews makes less sense every day.