Ask HN: What happens when capability decouples from credentials?

6•falsework•9h ago
Over the past 18 months, I've been collaborating with AI to build technical systems and conduct analytical work far outside my formal training. No CS degree, no background in the domains I'm working in, no institutional affiliation.

The work is rigorous. Someone with serious credentials has engaged and asked substantive questions. The systems function as designed. But I can't point to the traditional markers that would establish legitimacy—degrees, publications, years of experience in the field.

This isn't about whether AI "did the work." I made every decision, evaluated every output, iterated through hundreds of refinements. The AI was a tool that compressed what would have taken years of formal education into months of intensive, directed learning and execution.

Here's what interests me: We're entering a period where traditional signals of competence—credentials, institutional validation, experience markers—no longer reliably predict capability. Someone can now build sophisticated systems, conduct rigorous analysis, and produce novel insights without any of the credentials that historically signaled those abilities. The gap between "can do" and "should be trusted to do" is widening rapidly.

The old gatekeeping mechanisms are breaking down faster than new ones are forming. When credentials stop being reliable indicators of competence, what replaces them? How do we collectively establish legitimacy for knowledge and capability?

This isn't just theoretical—it's happening right now, at scale. Every day, more people are building things and doing work they have no formal qualification to do. And some of that work is genuinely good.

What frameworks should we use to evaluate competence when the traditional signals are becoming obsolete? How do we establish new language around expertise when terms like "expert," "rigorous," and "qualified" have been so diluted they've lost discriminatory power?

Comments

thenaturalist•9h ago
Adversarial work (be it agent or human).

The one difference between "can do" and "should be trusted to do" is the ability to systematically prove that "can do" holds up in close to 100% of task instances, including under adversarial conditions.

Hacking and pentesting are already scaling fully autonomously - and systematically.

For now, lower-level targets aren't yet attractive, as that kind of scale requires sophisticated (state) actors, but that is going to change.

So building systems that white-hat-prove your code is not only functional but competent is going to be critical if you don't want it ripped apart by black hats later on.
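
To make this concrete, here's a minimal sketch of that kind of adversarial check at the smallest possible scale. parse_amount is a made-up target; hypothesis is a Python property-based testing library that generates hostile inputs automatically instead of relying on hand-picked examples:

    # Property-based test: hypothesis throws thousands of arbitrary
    # (often pathological) strings at the target and shrinks any
    # failure down to a minimal counterexample. Run with: pytest
    from hypothesis import given, strategies as st

    def parse_amount(text: str) -> int:
        # Made-up function under test: parse "12.34" into cents.
        dollars, _, cents = text.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    @given(st.text())
    def test_survives_arbitrary_input(text):
        # The property being proven: malformed input must fail
        # cleanly with ValueError, never crash some other way or
        # return a non-int.
        try:
            result = parse_amount(text)
        except ValueError:
            return
        assert isinstance(result, int)

Autonomous pentesting is the same loop pointed at a whole system instead of a single function.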

One example that applies this quite nicely is roborev [0] by the legendary Wes McKinney.

0: https://github.com/roborev-dev/roborev

falsework•9h ago
This is a good point. You're right that adversarial testing provides one form of validation that doesn't depend on credentials: if the system holds up under systematic attack, that's evidence of competence regardless of who built it.

But I think there's a distinction worth making between technical robustness (does the code have vulnerabilities?) and epistemic legitimacy (should we trust the analysis/conclusions?).

Pentesting and formal verification can tell us whether a system is secure or functions correctly. That's increasingly automatable and credential-independent because the code either survives adversarial conditions or it doesn't.

But what about domains where validation is murkier? Cross-domain analysis, research synthesis, strategic thinking, design decisions? These require judgment calls where "correct" isn't binary. The work can be rigorous and well-reasoned without being formally provable.

The roborev example is interesting because code review is somewhat amenable to systematic validation. But we're also seeing AI collaboration extend into domains where adversarial testing isn't cleanly applicable—policy analysis, theoretical frameworks, creative work with analytical components.

I wonder if we need different validation frameworks for different types of work. Technical systems: adversarial testing and formal verification. Analytical/intellectual work: something else entirely. But what?

The deeper question: when the barrier to producing superficially plausible work drops to near-zero, how do we distinguish genuinely rigorous thinking from sophisticated-sounding nonsense? Credentials were a (flawed) heuristic for that. What replaces them in domains where adversarial testing doesn't apply?

moralestapia•2h ago
>traditional signals of competence—credentials, institutional validation, experience markers—no longer reliably predict capability

They never did anyway.

(And I do have those things ...)

Tell HN: Ralph Giles has died (Xiph.org | Rust@Mozilla | Ghostscript)

245•ffworld•11h ago•9 comments

SMTP server from scratch in Go – FSM, raw TCP, and buffer-oriented I/O

3•Jyotishmoy•1h ago•0 comments

Ask HN: What would you recommend a vibe coder learn about how all this works?

14•alexdobrenko•13h ago•14 comments

Ask HN: Why is my Claude experience so bad? What am I doing wrong?

4•moomoo11•2h ago•3 comments

Ask HN: Better hardware means OpenAI, Anthropic, etc. are doomed in the future?

3•kart23•6h ago•4 comments

Ask HN: Did YouTube change how it handles uBlock?

13•tefloon69•14h ago•7 comments

Ask HN: What are you working on? (February 2026)

327•david927•4d ago•1122 comments

Ask HN: Do sociotechnical pressures select for beneficial or harmful AI systems?

3•jerlendds•13h ago•1 comment

Who discovered grokking and why is the name hard to find?

2•asmodeuslucifer•4h ago•0 comments

Ask HN: Tools to code using voice?

5•emerongi•20h ago•3 comments

Ask HN: How do you audit LLM code in programming languages you don't know?

5•syx•16h ago•5 comments

Ask HN: We're building a saving app for European savers and need GTM advice

3•AlePra00•13h ago•6 comments

Ask HN: If your OpenClaw could do 1 thing it currently can't, what would it be?

5•stosssik•11h ago•3 comments

Ask HN: How do founders demo real product without exposing sensitive data?

4•legitimate_key•12h ago•3 comments

Ask HN: How do you "step through" your own anxiety?

5•schneak•12h ago•7 comments

Ask HN: Are you using an agent orchestrator to write code?

30•gusmally•15h ago•45 comments

Ask HN: Would you use context-based "modes" in Instagram(work,study,sport,news)?

3•MatiasLaudonio•10h ago•2 comments

Ask HN: Why are electronics still so unrecyclable?

70•alexandrehtrb•1d ago•137 comments

Ask HN: How much PTO do you get?

2•SunshineTheCat•11h ago•6 comments

Ask HN: Best practices for AI agent safety and privacy

2•mw1•11h ago•0 comments

Ask HN: How to build text-to-app platforms?

2•desperado1•12h ago•1 comment

Ask HN: GPT-5.3-Codex being silently routed to GPT-5.2?

4•tardis_thad•13h ago•2 comments

Ask HN: What's the current state of ChatGPT Apps?

3•arthurlee•15h ago•1 comment

Ask HN: Has anyone achieved recursive self-improvement with agentic tools?

9•nycdatasci•1d ago•14 comments

Ask HN: Is Prettier extension working for you in Cursor?

2•vldszn•17h ago•0 comments

Ask HN: Anyone else get bricked by the macOS update?

2•bix6•18h ago•1 comment

Ask HN: Dumping GitHub for Forgejo for a free and open source project

4•th0th•20h ago•4 comments

Tell HN: GPT-5.3-codex is now available in the API

3•bigwheels•16h ago•0 comments

Ask HN: Why is everyone here so AI-hyped?

29•fandorin•2d ago•18 comments