
Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•1m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•3m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•3m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•5m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•6m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•7m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•7m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•7m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•7m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•8m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•9m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•10m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•16m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•17m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•17m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
18•bookofjoe•17m ago•7 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•18m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•19m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•20m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•20m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•20m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•20m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•21m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•22m ago•1 comment

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•22m ago•1 comment

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•23m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•27m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•27m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•28m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•28m ago•0 comments

Ask HN: Why does job search feel so unclear even for strong candidates?

4•Signatura•4w ago
For years, job search has been framed as a personal performance problem. If someone struggles to move forward, the assumption is usually missing skills, weak experience, or a poorly written CV. But after going through the process ourselves, we reached a different conclusion: the system itself is unclear.

The people behind this post have each been through multiple job searches where progress felt reactive rather than intentional. Even with solid backgrounds, the process often lacked structure, visibility, and meaningful feedback. Decisions were made with limited information, and effort did not always translate into learning.

One recurring issue was the absence of feedback loops. CVs were edited repeatedly without understanding what actually improved outcomes. Interviews were prepared for without clarity on how candidates were perceived. Rejections arrived without explanation, leaving people to change direction blindly.

Over time, it became apparent that job seekers are asked to make high-stakes decisions with almost no structure. What should be changed, and why? Which signals matter at each stage? How do you distinguish between a positioning issue, a communication issue, and a simple lack of fit?

Most tools address isolated moments in the process. A CV template here. Interview tips there. But the job search itself remains fragmented, with no clear way to connect actions to outcomes.

This raises a broader question. What would it look like if job search were treated as a system rather than a set of disconnected tasks? Something with feedback, structure, and visibility, instead of guesswork and repetition.

Treating job search as a system rather than a series of disconnected tasks may be one of the most overlooked opportunities in how careers are navigated today.

Curious how others here think about this - where does the job search process break down most for you?

Comments

Signatura•4w ago
I’m one of the co-founders and went through this process myself. Not promoting anything here - genuinely interested in how others experience this and what helped create clarity.
winshaurya9•4w ago
Searching for a job myself. How can I stand out from the ultra-showy candidates launching B2B vertical SaaS when what I have is a simpler project that solved a problem for a smaller group of people, one I built from scratch: every controller, every API, every fallback, everything serverless for free deployment and lower server load? I have a real interest in solving problems and put hours into my art, but without internal reach it seems hard to break into the industry. Mind you, I’m still in college and confused about whether I’ll even stand a chance.
Signatura•4w ago
I don’t think the gap you’re describing is about quality of work as much as how it gets interpreted.

What you described (building something end to end, making real tradeoffs, and caring about the problem) is exactly the kind of signal people say they want, but it doesn’t always map cleanly to how hiring filters operate.

Being early in your career makes that mismatch louder, not smaller. Without context, depth can read as “small” and polish can read as “impact”. One thing that might help is making the reasoning behind your choices visible, not just the output.

When reviewers can see why you built things the way you did, it becomes easier to compare substance to surface. It’s normal to feel unsure at this stage, but from the outside, what you’re describing sounds like a real foundation, not a disadvantage. I wish you all the best!

sinenomine•4w ago
Monetary policy, software tax, the post-COVID hiring glut, pervasive mental health issues among HR professionals. For older pros there is also age discrimination. There is also the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.
Signatura•4w ago
I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.

Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.

From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?

austin-cheney•4w ago
2 reasons

1. Poor signaling. There is a bunch of noise in both job requirements and resumes.

2. Unclear goals. Many technical job postings are not clear in what they want. This is not really the fault of the employer but more of an industry failure to identify qualifications.

As a result, you get super talented people who cannot find work and, simultaneously, grossly unqualified people who easily find work that is substantially overpaid for the expected level of delivery and responsibilities.

Signatura•4w ago
Austin, that makes sense. The signaling problem cuts both ways: resumes try to compress complex ability into keywords, and job descriptions try to describe real work with abstract labels. A lot gets lost in between.

The unclear goals point is important too. When a role isn’t well-defined, hiring ends up optimizing for proxies rather than outcomes. Do you think this is mostly a language problem (how roles and experience are described), or a structural one where teams don’t actually agree internally on what success in the role looks like?

austin-cheney•3w ago
My experience tells me it is an expectation problem coupled with missing standards/baselines.

Most employers need a person in the seat doing the work and will lower their preferences to find enough candidates for a selection. Government does not do that. If candidates fail to meet the requirements for a government contract, the seat just remains empty.

Consider how engineering works. An engineer’s resume will just list employment history, education, and awards. There is no need to fluff things up, because engineers are required to hold a license, and that demonstrates qualification. Software does not have that, so people have to explain their capabilities over and over.

Signatura•3w ago
That’s an interesting comparison... The licensing point highlights how much of the burden in software hiring sits on explanation rather than verification. Without shared baselines, candidates end up narrating their competence instead of pointing to an accepted signal. The expectation gap you describe also explains why requirements feel flexible in practice but rigid on paper. When the real goal is “get someone productive soon,” standards tend to bend quietly rather than evolve explicitly.

Do you think the absence of clear baselines is something the industry could realistically converge on, or is software work too varied for that to work in the way it does for licensed engineering?

austin-cheney•3w ago
Programming is writing logic, which is a universal quality. So the way I would do it is to create a fictional programming language, provide some familiarity and training time immediately before a licensing exam (at the testing location), and then have the candidate solve real problems using the fictional language during the exam. It tests the ability to deliver solutions rather than the ability to memorize patterns or reproduce familiar conventions. Too many developers cannot write original logic.

Then there could be additional specialized qualifications above the base qualification, for example: security/certificates/cryptography, experimentation, execution performance, transmission/API management.

btrettel•4w ago
I think a big part of the problem is an overly narrow view of what a qualified candidate looks like from the hiring side. Tons of qualified people are rejected because they don't look qualified to the people hiring.

For example, a friend recently had an interview where the interviewer seemed disappointed that my friend didn’t have experience solving a problem in one particular way, as if that were the only way to solve it. In my opinion, the way the interviewer solves that problem is inefficient. But they didn’t seem to see any other way.

(Yes, a candidate can communicate their abilities better. But in my experience, this only goes so far, and the people hiring need to make more effort.)

A better process would be more open-minded and test itself by interviewing candidates who the interviewer thinks are bad. In science there’s an idea called negative testing. If a test is supposed to separate good from bad, you can’t just check what the test says is good; you also need to check what the test says is bad. If good things are marked as bad by the test, something’s wrong with the test.

If I were hiring, I’d probably start by filtering out people who don’t meet very basic requirements, then have some fairly open-ended interviews early with randomly selected people (who pass the initial screening) to refine the hiring process and help me realize gaps in my understanding.
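A minimal sketch of that negative-testing loop in Python, assuming a hypothetical keyword screen and a slower “ground truth” interview; the data, names, and screening rule are made up for illustration, not any real pipeline:

    import random

    def screen(candidate):
        # Stand-in for a keyword-based resume filter (hypothetical:
        # it only passes resumes that mention "toolX").
        return "toolX" in candidate["resume"]

    def deep_interview(candidate):
        # Stand-in for the slower, open-ended evaluation.
        return candidate["actually_strong"]

    candidates = [
        {"resume": "built an API end to end", "actually_strong": True},
        {"resume": "toolX certified", "actually_strong": False},
        {"resume": "toolX, 5 years", "actually_strong": True},
        {"resume": "shipped a serverless app", "actually_strong": True},
    ]

    # Negative testing: audit a random sample of REJECTIONS,
    # not just the people the screen let through.
    rejected = [c for c in candidates if not screen(c)]
    audit = random.sample(rejected, k=min(2, len(rejected)))

    false_negatives = sum(deep_interview(c) for c in audit)
    print(f"{false_negatives}/{len(audit)} audited rejections were strong")

If the audited rejections keep turning out strong, the screen is filtering for familiarity rather than quality, which is exactly the failure mode described above.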

Signatura•4w ago
I agree with this. What stands out to me is that the hiring process often treats one internal mental model as “correct”, and anything outside of it as a flaw in the candidate.

The example you gave about solving the same problem differently is common; different approaches get mistaken for lack of competence.

I like the negative testing idea a lot. If a hiring process never examines who it’s rejecting, it has no way to know whether it’s filtering quality or just filtering familiarity.

Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

btrettel•3w ago
> Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

I'm sure many folks hiring do iteratively improve their hiring criteria, though I'm skeptical of how rigorous their process is. For all I know they could make their hiring criteria worse over time! I have never been involved in a hiring decision, so what I write is from the perspective of a job candidate.

Signatura•3w ago
That makes sense, and I think your skepticism is reasonable.

From the candidate side, it’s almost impossible to tell whether criteria are being refined thoughtfully or just drifting based on recent hires or strong opinions in the room.

What strikes me is that without explicit feedback loops, iteration can easily turn into reinforcement: people conclude “this worked” without ever seeing the counterfactual of who was filtered out.

From the outside, it often looks less like a calibrated process and more like accumulated intuition. I’m curious whether that matches what others here have seen from the inside.