frontpage.

Show HN: UI testing using multimodal LLMs

https://kodefreeze.com
1•kodefreeze•42s ago•0 comments

San Jose Mayor: CA's proposed wealth tax pushes burden onto middle-class families [video]

https://www.youtube.com/watch?v=muVVOjJsLG8
1•donsupreme•1m ago•0 comments

Max Payne – two decades later – Graphics Critique

https://darkcephas.blogspot.com/2021/07/max-payne-two-decades-later-graphics.html
2•davikr•6m ago•0 comments

Show HN: Just published a hard-SF novel: Voyager 1 returns with a quantum palantir

https://www.amazon.com/dp/B0GFSMP572
1•dufbugderopa•11m ago•1 comment

Orca: A New Architecture for Efficient AGI Through Parent-Teacher Learning

https://x.com/EricOmnigenius/article/2009656779945451932
2•ericspecullaas•13m ago•0 comments

A curated list of awesome explorable explanations

https://github.com/blob42/awesome-explorables
2•vitalnodo•16m ago•0 comments

The Declining Value of Personal Advice

https://www.gojiberries.io/the-declining-value-of-interpersonal-advice/
1•neehao•16m ago•0 comments

Show HN: Artdots: The benefits of creating a side project

https://artdots.co/blog/artdots-the-benefits-of-creating-a-side-project
1•veliona•20m ago•0 comments

Nvidia Announces Alpamayo Open-Source AI Models to Accelerate Reasoning-Based AV

https://nvidianews.nvidia.com/news/alpamayo-autonomous-vehicle-development
2•lateforwork•23m ago•0 comments

Ask HN: Before codebase review, replace all vars containing simple with complex?

1•gitprolinux•23m ago•0 comments

Show HN: Umaro – An interactive music theory suite for guitarists

https://www.umaro.app/
1•SnowingXIV•24m ago•0 comments

ZLUDA: run unmodified CUDA on non-Nvidia hardware

https://www.phoronix.com/news/ZLUDA-CUDA-13.1-Compatibility
2•gigatexal•29m ago•0 comments

"About a decade ago... I developed an automated theorem-proving framework"

https://twitter.com/getjonwithit/status/2009602836997505255
2•Ariarule•29m ago•0 comments

Tool for live presentations using manim

https://github.com/jeertmans/manim-slides
1•vitalnodo•33m ago•0 comments

Workers at Redmond SpaceX lab exposed to toxic chemicals

https://www.fox13seattle.com/video/fmc-w1ga4pk97gxq0hj5
6•SilverElfin•35m ago•0 comments

Ask HN: When has a "dumb" solution beaten a sophisticated one for you?

3•amadeuswoo•47m ago•2 comments

Why some clothes shrink in the wash – and how to 'unshrink' them

https://www.swinburne.edu.au/news/2025/08/why-some-clothes-shrink-in-the-wash-and-how-to-unshrink...
1•OptionOfT•51m ago•0 comments

Show HN: VAM Seek – 2D video navigation grid, 15KB, zero server load

https://github.com/unhaya/vam-seek
5•haasiy•52m ago•0 comments

A curated list of free courses with certifications

https://github.com/cloudcommunity/Free-Certifications
3•javatuts•58m ago•0 comments

Bruno – local and Git-native solution to accelerate and secure API

https://www.usebruno.com/
1•javatuts•58m ago•0 comments

OpenAI is reportedly asking contractors to upload real work from past jobs

https://techcrunch.com/2026/01/10/openai-is-reportedly-asking-contractors-to-upload-real-work-fro...
13•pseudolus•1h ago•0 comments

Datadog, thank you for blocking us

https://www.deductive.ai/blogs/datadog-thank-you-for-blocking-us
34•gpi•1h ago•1 comment

Google moonshot spinout SandboxAQ claims an ex-exec is attempting 'extortion'

https://techcrunch.com/2026/01/09/google-moonshot-spinout-sandboxaq-claims-an-ex-exec-is-attempti...
1•Geekette•1h ago•0 comments

The new vs. used car debate is dead. They're both expensive debt traps

https://washingtonpost.com/business/2026/01/10/1000-payments-car-debt-trap/
6•pseudolus•1h ago•1 comment

Show HN: Reverse-engineering images into model-specific syntax (MJ, Nano, Flux, SD)

https://promptslab.app/image-to-prompt
1•jackzhuo•1h ago•1 comment

Npmgraph – a web-based tool that visualizes NPM package dependencies

https://npmgraph.js.org/
2•javatuts•1h ago•0 comments

Show HN: Hashing Go Functions Using SSA and Scalar Evolution

https://github.com/BlackVectorOps/semantic_firewall
2•BlackVectorOps•1h ago•1 comment

Show HN: mister.jar – Modular MRJAR Files Made Easy

http://lingocoder.com/mrjar/mrjar.usage.html
1•burnerToBetOut•1h ago•0 comments

Culture Isn't Stagnating, You Guys Are Just Old

https://www.jenn.site/culture-isnt-stagnating-you-guys-are-just-old/
7•Analemma_•1h ago•4 comments

Show HN: I made an Android app which sends Health Connect data to your webhooks

https://github.com/mcnaveen/health-connect-webhook
1•mcnx097•1h ago•0 comments

Ask HN: Why does job search feel so unclear even for strong candidates?

4•Signatura•14h ago
For years, job search has been framed as a personal performance problem. If someone struggles to move forward, the assumption is usually missing skills, weak experience, or a poorly written CV. But after going through the process ourselves, we reached a different conclusion: the system itself is unclear.

The people behind this post have each been through multiple job searches where progress felt reactive rather than intentional. Even with solid backgrounds, the process often lacked structure, visibility, and meaningful feedback. Decisions were made with limited information, and effort did not always translate into learning.

One recurring issue was the absence of feedback loops. CVs were edited repeatedly without understanding what actually improved outcomes. Interviews were prepared for without clarity on how candidates were perceived. Rejections arrived without explanation, leaving people to change direction blindly.

Over time, it became apparent that job seekers are asked to make high-stakes decisions with almost no structure. What should be changed and why? Which signals matter at each stage? How do you distinguish between a positioning issue, a communication issue, and a simple lack of fit?

Most tools address isolated moments in the process. A CV template here. Interview tips there. But the job search itself remains fragmented, with no clear way to connect actions to outcomes.

This raises a broader question. What would it look like if job search were treated as a system rather than a set of disconnected tasks? Something with feedback, structure, and visibility, instead of guesswork and repetition.

Treating it that way may be one of the most overlooked opportunities in how careers are navigated today.

Curious how others here think about this - where does the job search process break down most for you?

Comments

Signatura•14h ago
I’m one of the co-founders and went through this process myself. Not promoting anything here - genuinely interested in how others experience this and what helped create clarity.
winshaurya9•14h ago
Searching for a job myself. How can I stand out from the ultra-showy candidates launching B2B vertical SaaS when I have a simpler project that solved a problem for a smaller group of people, and that I built from scratch - every controller, every API, every fallback - making everything serverless for free deployment and lower server load, with real interest in solving the problem and hours put into my art? Without internal reach it seems hard to break into the industry. Mind you, I am still in college and unsure whether I will even stand a chance.
Signatura•11h ago
I don’t think the gap you’re describing is about quality of work as much as how it gets interpreted.

What you described - building something end to end, making real tradeoffs, and caring about the problem - is exactly the kind of signal people say they want, but it doesn’t always map cleanly to how hiring filters operate.

Being early in your career makes that mismatch louder, not smaller. Without context, depth can look like “small” and polish can look like “impact”. One thing that might help is making the reasoning behind your choices visible, not just the output.

When reviewers can see why you built things the way you did, it becomes easier to compare substance to surface. It’s normal to feel unsure at this stage, but from the outside, what you’re describing sounds like a real foundation, not a disadvantage. I wish you all the best!

sinenomine•14h ago
Monetary policy, software tax, the post-COVID hiring glut, pervasive mental health issues among HR professionals. For older pros there is also age discrimination. And there is the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.
Signatura•11h ago
I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.

Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.

From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?

austin-cheney•13h ago
2 reasons

1. Poor signaling. There is a bunch of noise in both job requirements and resumes.

2. Unclear goals. Many technical job postings are not clear in what they want. This is not really the fault of the employer but more of an industry failure to identify qualifications.

As a result you get super talented people who cannot find work, and simultaneously grossly unqualified people who easily find work that is substantially overpaid for the expected level of delivery and responsibilities.

Signatura•11h ago
Austin, that makes sense. The signaling problem cuts both ways: resumes try to compress complex ability into keywords, and job descriptions try to describe real work with abstract labels. A lot gets lost in between.

The unclear goals point is important too. When a role isn’t well-defined, hiring ends up optimizing for proxies rather than outcomes. Do you think this is mostly a language problem (how roles and experience are described), or a structural one where teams don’t actually agree internally on what success in the role looks like?

austin-cheney•11h ago
My experience tells me it is an expectation problem coupled against missing standards/baselines.

Most employers need a person in the seat doing the work and will lower their preferences to find enough candidates for a selection. Government does not do that. If candidates fail to meet the requirements for a government contract the seat just remains empty.

Consider how engineering works. An engineer’s resume will just list employment history, education, and awards. There is no need to fluff things up because engineers are required to hold licenses, and that demonstrates qualification. Software does not have that, so people have to explain their capabilities over and over.

Signatura•10h ago
That’s an interesting comparison... The licensing point highlights how much of the burden in software hiring sits on explanation rather than verification. Without shared baselines, candidates end up narrating their competence instead of pointing to an accepted signal. The expectation gap you describe also explains why requirements feel flexible in practice but rigid on paper. When the real goal is “get someone productive soon,” standards tend to bend quietly rather than evolve explicitly.

Do you think the absence of clear baselines is something the industry could realistically converge on, or is software work too varied for that to work in the way it does for licensed engineering?

austin-cheney•7h ago
Programming is writing logic, which is a universal quality. So the way I would do it is to create a fictional programming language, provide some familiarity and training time immediately before a licensing exam (at the testing location), and then have the candidate solve real problems using the fictional language during the exam. It tests for the ability to deliver solutions more than memorizing patterns or reproducing familiar conventions. Too many developers cannot write original logic.

Then there could be additional specialized qualifications above the base qualification, for example: security/certificates/cryptography, experimentation, execution performance, transmission/API management.
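
A minimal Python sketch of what such an exam could test, assuming a hypothetical stack-based toy language - the opcodes and the sample task are invented for illustration and are not part of the proposal above:

    # Hypothetical sketch: a tiny stack-based "exam language" interpreter.
    # The language, its opcodes, and the sample task are invented; the
    # point is testing original logic rather than familiar conventions.

    def run(program, stack=None):
        """Execute a whitespace-separated program against a stack."""
        stack = list(stack or [])
        for token in program.split():
            if token.isdigit():
                stack.append(int(token))  # integer literal: push it
            elif token == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif token == "dup":
                stack.append(stack[-1])
            elif token == "swap":
                stack[-1], stack[-2] = stack[-2], stack[-1]
            else:
                raise ValueError(f"unknown opcode: {token}")
        return stack

    # An exam task might be: "compute 2*x + 1 for the value x on the
    # stack, using only the opcodes above." One candidate solution:
    assert run("dup add 1 add", [5]) == [11]  # 2*5 + 1

A candidate who can solve tasks like this in a language they first saw an hour ago is demonstrating the ability to write original logic, which is the quality the exam is meant to license.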

btrettel•13h ago
I think a big part of the problem is an overly narrow view of what a qualified candidate looks like from the hiring side. Tons of qualified people are rejected because they don't look qualified to the people hiring.

For example, recently a friend had an interview and the guy interviewing him seemed disappointed that my friend didn't have experience solving a problem in a particular way as if that were the only way to solve that problem. In my opinion, the way the interviewer solves that problem is inefficient. But they didn't seem to see any other way.

(Yes, a candidate can communicate their abilities better. But in my experience, this only goes so far, and the people hiring need to make more effort.)

A better process would be more open-minded and test itself by interviewing candidates who the interviewer thinks are bad. In science there's an idea called negative testing. If a test is supposed to separate good from bad, you can't just check what the test says is good; you also need to check what the test says is bad. If good things are marked as bad by the test, something's wrong with the test.

If I were hiring, I'd probably start by filtering out people who don't meet very basic requirements, then run some fairly open-ended interviews early with randomly selected people (who pass the initial screening) to refine the hiring process and help me realize gaps in my understanding.
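
To make the negative-testing idea concrete, here is a minimal Python sketch - the candidates, the labels, and the keyword filter standing in for a real screening stage are all invented for illustration:

    # Hypothetical sketch of "negative testing" a screening filter: check
    # not only who passes, but whether known-good candidates are rejected.

    candidates = [
        {"name": "A", "keywords": 8, "actually_good": True},
        {"name": "B", "keywords": 2, "actually_good": True},   # good, low signal
        {"name": "C", "keywords": 9, "actually_good": False},  # polished, weak
        {"name": "D", "keywords": 1, "actually_good": False},
    ]

    def screen(c):
        """A naive keyword count standing in for a real screening stage."""
        return c["keywords"] >= 5

    rejected = [c for c in candidates if not screen(c)]
    false_negatives = [c for c in rejected if c["actually_good"]]

    # If good candidates land in the reject pile, the filter is measuring
    # familiarity or polish, not quality.
    share = len(false_negatives) / len(rejected)
    print(f"false-negative share of rejections: {share:.0%}")  # 50%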

Signatura•11h ago
I agree with this. What stands out to me is that the hiring process often treats one internal mental model as “correct”, and anything outside of it as a flaw in the candidate.

The example you gave about solving the same problem differently is common; different approaches get mistaken for lack of competence.

I like the negative testing idea a lot. If a hiring process never examines who it’s rejecting, it has no way to know whether it’s filtering quality or just filtering familiarity.

Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

btrettel•11h ago
> Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

I'm sure many folks hiring do iteratively improve their hiring criteria, though I'm skeptical of how rigorous their process is. For all I know they could make their hiring criteria worse over time! I have never been involved in a hiring decision, so what I write is from the perspective of a job candidate.

Signatura•10h ago
That makes sense, and I think your skepticism is reasonable.

From the candidate side, it’s almost impossible to tell whether criteria are being refined thoughtfully or just drifting based on recent hires or strong opinions in the room.

What strikes me is that without explicit feedback loops, iteration can easily turn into reinforcement: people conclude “this worked” without ever seeing the counterfactual of who was filtered out.

From the outside, it often looks less like a calibrated process and more like accumulated intuition. I’m curious whether that matches what others here have seen from the inside.
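
A tiny simulation can make the counterfactual point concrete. In this invented setup the screening signal is deliberately uncorrelated with true quality, yet a team that only ever looks at its hires would see nothing wrong:

    # Hypothetical sketch: iteration as reinforcement. If you only observe
    # the people you hired, a filter can look fine even when it rejects
    # just as much talent as it admits. All numbers are illustrative.
    import random

    random.seed(0)
    pool = [{"signal": random.random(), "quality": random.random()}
            for _ in range(10_000)]  # quality is independent of signal

    hired = [c for c in pool if c["signal"] > 0.7]
    rejected = [c for c in pool if c["signal"] <= 0.7]

    def avg(group):
        return sum(c["quality"] for c in group) / len(group)

    # Seen from the inside, "our hires are fine" reads as validation...
    print(f"avg quality of hires:   {avg(hired):.2f}")
    # ...but the counterfactual nobody inspects is essentially identical.
    print(f"avg quality of rejects: {avg(rejected):.2f}")

Both averages come out near 0.50, so "this worked" and "this filtered nothing" are indistinguishable without ever examining the reject pile.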