
Ask HN: Why does job search feel so unclear even for strong candidates?

4•Signatura•4w ago
For years, job search has been framed as a personal performance problem. If someone struggles to move forward, the assumption is usually missing skills, weak experience, or a poorly written CV. But after going through the process ourselves, we reached a different conclusion: the system itself is unclear.

The people behind this post have each been through multiple job searches where progress felt reactive rather than intentional. Even with solid backgrounds, the process often lacked structure, visibility, and meaningful feedback. Decisions were made with limited information, and effort did not always translate into learning.

One recurring issue was the absence of feedback loops. CVs were edited repeatedly without understanding what actually improved outcomes. Interviews were prepared for without clarity on how candidates were perceived. Rejections arrived without explanation, leaving people to change direction blindly.

Over time, it became apparent that job seekers are asked to make high-stakes decisions with almost no structure. What should be changed and why? Which signals matter at each stage? How do you distinguish between a positioning issue, a communication issue, or a simple lack of fit?

Most tools address isolated moments in the process. A CV template here. Interview tips there. But the job search itself remains fragmented, with no clear way to connect actions to outcomes.

This raises a broader question. What would it look like if job search were treated as a system rather than a set of disconnected tasks? Something with feedback, structure, and visibility, instead of guesswork and repetition.

Treating job search as a system rather than a series of disconnected tasks may be one of the most overlooked opportunities in how careers are navigated today.

Curious how others here think about this - where does the job search process break down most for you?

Comments

Signatura•4w ago
I’m one of the co-founders and went through this process myself. Not promoting anything here - genuinely interested in how others experience this and what helped create clarity.
winshaurya9•4w ago
I'm searching for a job myself. How can I stand out against the ultra-showy candidates launching B2B vertical SaaS, when what I have is a simpler project, built from scratch, that solved a problem for a smaller group of people? Every controller, every API, every fallback; I made everything serverless for free deployment and lower server load, and I have a real interest in solving the problem and putting hours into my art. Without internal reach it seems hard to break into the industry. Mind you, I'm still in college and unsure whether I'll even stand a chance.
Signatura•4w ago
I don’t think the gap you’re describing is about quality of work as much as how it gets interpreted.

What you described, building something end to end, making real tradeoffs, and caring about the problem, is exactly the kind of signal people say they want, but it doesn't always map cleanly to how hiring filters operate.

Being early in your career makes that mismatch louder, not smaller. Without context, depth can look like “small” and polish can look like “impact”. One thing that might help is making the reasoning behind your choices visible, not just the output.

When reviewers can see why you built things the way you did, it becomes easier to compare substance to surface. It’s normal to feel unsure at this stage, but from the outside, what you’re describing sounds like a real foundation, not a disadvantage. I wish you all the best!

sinenomine•4w ago
Monetary policy, software tax, the post-COVID hiring glut, pervasive mental-health issues among HR professionals. For older pros there is also age discrimination. And there is the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.
Signatura•4w ago
I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.

Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.

From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?

austin-cheney•4w ago
2 reasons

1. Poor signaling. There is a bunch of noise in both job requirements and resumes.

2. Unclear goals. Many technical job postings are not clear in what they want. This is not really the fault of the employer but more of an industry failure to identify qualifications.

As a result you get super talented people who cannot find work and, simultaneously, grossly unqualified people who easily find work that is substantially overpaid for the expected level of delivery and responsibility.

Signatura•4w ago
Austin, that makes sense. The signaling problem cuts both ways: resumes try to compress complex ability into keywords, and job descriptions try to describe real work with abstract labels. A lot gets lost in between.

The unclear goals point is important too. When a role isn’t well-defined, hiring ends up optimizing for proxies rather than outcomes. Do you think this is mostly a language problem (how roles and experience are described), or a structural one where teams don’t actually agree internally on what success in the role looks like?

austin-cheney•4w ago
My experience tells me it is an expectation problem coupled against missing standards/baselines.

Most employers need a person in the seat doing the work and will lower their preferences to find enough candidates for a selection. Government does not do that. If candidates fail to meet the requirements for a government contract the seat just remains empty.

Consider how engineering works. An engineer's resume will just list employment history, education, and awards. There is no need to fluff things up because engineers are required to hold a license, and that demonstrates qualification. Software does not have that, so people have to explain their capabilities over and over.

Signatura•3w ago
That’s an interesting comparison... The licensing point highlights how much of the burden in software hiring sits on explanation rather than verification. Without shared baselines, candidates end up narrating their competence instead of pointing to an accepted signal. The expectation gap you describe also explains why requirements feel flexible in practice but rigid on paper. When the real goal is “get someone productive soon,” standards tend to bend quietly rather than evolve explicitly.

Do you think the absence of clear baselines is something the industry could realistically converge on, or is software work too varied for that to work in the way it does for licensed engineering?

austin-cheney•3w ago
Programming is writing logic, which is a universal skill. So the way I would do it is to create a fictional programming language, provide some familiarization and training time immediately before a licensing exam (at the testing location), and then have the candidate solve real problems using the fictional language for the exam. That tests the ability to deliver solutions rather than to memorize patterns or reproduce familiar conventions. Too many developers cannot write original logic.

Then there could be additional specialized qualifications above the base qualification, for example: security/certificates/cryptography, experimentation, execution performance, transmission/API management.

btrettel•4w ago
I think a big part of the problem is an overly narrow view of what a qualified candidate looks like from the hiring side. Tons of qualified people are rejected because they don't look qualified to the people hiring.

For example, recently a friend had an interview and the guy interviewing him seemed disappointed that my friend didn't have experience solving a problem in a particular way as if that were the only way to solve that problem. In my opinion, the way the interviewer solves that problem is inefficient. But they didn't seem to see any other way.

(Yes, a candidate can communicate their abilities better. But in my experience, this only goes so far, and the people hiring need to make more effort.)

A better process would be more open-minded and would test itself by interviewing candidates the interviewer thinks are bad. In science there's an idea called negative testing: if a test is supposed to separate good from bad, you can't just check what the test says is good; you also need to check what the test says is bad. If good things are marked as bad by the test, something's wrong with the test. If I were hiring, I'd probably start by filtering out people who don't meet very basic requirements, then hold some fairly open-ended interviews early with randomly selected people (who pass the initial screening) to refine the hiring process and expose gaps in my understanding.
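
The negative-testing idea can be made concrete with a toy simulation (everything here is hypothetical: the uniform "true quality" model, the noise level, and the 0.7 cutoff are illustrative assumptions, not a claim about any real hiring pipeline). The point is only that a noisy screen must be audited on its rejections, not just its accepts:

```python
import random

random.seed(0)

# 1000 hypothetical candidates, each with an unobservable true quality.
candidates = [{"quality": random.random()} for _ in range(1000)]

# The screening filter only sees a noisy proxy of quality
# (keyword matching, resume polish, interviewer priors, etc.).
for c in candidates:
    c["signal"] = c["quality"] + random.gauss(0, 0.3)

CUTOFF = 0.7
passed = [c for c in candidates if c["signal"] >= CUTOFF]
rejected = [c for c in candidates if c["signal"] < CUTOFF]

# A naive process only ever inspects `passed`. Negative testing also
# samples `rejected` to estimate how many genuinely strong candidates
# (quality >= CUTOFF) the filter is silently losing.
strong_rejected = sum(1 for c in rejected if c["quality"] >= CUTOFF)
print(f"strong candidates filtered out: {strong_rejected} / {len(rejected)} rejections")
```

With noisy signals, the rejected pool reliably contains strong candidates; a process that never samples its own rejections has no way to measure that loss, which is the gap negative testing is meant to close.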

Signatura•4w ago
I agree with this. What stands out to me is that the hiring process often treats one internal mental model as “correct”, and anything outside of it as a flaw in the candidate.

The example you gave about solving the same problem differently is common; different approaches get mistaken for lack of competence.

I like the negative testing idea a lot. If a hiring process never examines who it’s rejecting, it has no way to know whether it’s filtering quality or just filtering familiarity.

Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

btrettel•4w ago
> Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

I'm sure many folks hiring do iteratively improve their hiring criteria, though I'm skeptical of how rigorous their process is. For all I know they could make their hiring criteria worse over time! I have never been involved in a hiring decision, so what I write is from the perspective of a job candidate.

Signatura•3w ago
That makes sense, and I think your skepticism is reasonable.

From the candidate side, it’s almost impossible to tell whether criteria are being refined thoughtfully or just drifting based on recent hires or strong opinions in the room.

What strikes me is that without explicit feedback loops, iteration can easily turn into reinforcement: people conclude "this worked" without ever seeing the counterfactual of who was filtered out.

From the outside, it often looks less like a calibrated process and more like accumulated intuition. I’m curious whether that matches what others here have seen from the inside.