
Show HN: AI Lint your agents' work to build faster and better

2•keepamovin•2w ago
I've been using this thing I made called "AI Lint": a collection of doctrine and rejected anti-patterns, covering both the practices I noticed agents failing to use and the patterns they lean into too heavily.

It also includes general patterns for debugging and architecture that help agents manage complexity in large codebases.
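
To make "doctrine" and "rejects" concrete, here is a hypothetical sketch of what a single entry might look like (the rule ID, wording, and layout are invented for illustration, not the actual pack contents):

  JS-R7: Silent Catch (hypothetical example)
  Status: REJECTED
  Pattern: try/catch blocks that swallow errors without logging or rethrowing
  Why: hides failures from both the agent and the human reviewing its work
  Instead: log with context and rethrow, or handle the specific error type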

It's kind of cool to watch Codex say: "Also, to dig in, I need to follow the repo’s AI Lint rules first. I can start reading those and then inspect the recent changes. Do you want me to proceed?"

These are non-syntactic, non-mechanical, opinionated pieces of senior engineering wisdom, tailored to specific languages and frameworks, won from experience, and aimed at letting the AI simulate a senior architect's "taste" rather than just hammering away until code "basically works".

The problem of AI spaghetti code is real, and it's easier to embed hard doctrine and rejects into the repo that agents must follow, preventing their nonsense (and reducing time to target), than it is to fix their problems later.
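
A minimal sketch of how that embedding might look in a repo (the layout and file names are illustrative assumptions, not the shipped structure):

  repo/
    AGENTS.md          points agents at the lint rules before any task
    ai-lint/
      doctrine.md      positive patterns agents should follow
      rejects-js.md    JavaScript anti-patterns, each with a rule ID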

Been there, done that. Now I've built AI Lint, and it's yours too. For a fee, but there's a free taster to get the idea: AIs handle syntax fluently, but they don't readily pick up the "grain" of a language.

I'm proud of the name; I think it really resonates and captures what this is: both "lint for AI" and "AI-native lint". It leans into their strengths, understanding complex nuance and following tasks, while patching their weakness at bringing the right solutions to bear at the right time in complex situations.

The paid version also has sections on secrets security, a guide on how to author your own doctrine that AI will follow, and an override protocol you can use to resolve conflicts between competing doctrine (it's all tradeoffs), or simply to override the lint for your own team's taste.

On the whole, agents are needed and here to stay. In the meantime, I think domain-specific context-boosting like this, and super-precise tooling that lets agents jump to tasks quickly while retaining contextual clarity, will be two (among many) huge levers. For now, AI still lacks what senior devs bring to the table: depth, wisdom, and great design/architecture instincts. But senior devs can't work as fast, unaided, as an AI can, so it's better to figure out how to work together optimally; there's no point using an AI that produces subpar work you have to redo. AI Lint is a small, hopefully useful, step in that direction.

I hope you will try it, find value in it and pay for it, and support its future development by staying on board as new packs land for additional languages, frameworks, and useful domains.

AI is strong, but judgment is orthogonal and still in short supply. With AI Lint, it's transferable.

Check it out! https://ai-lint.dosaygo.com :)

Comments

nvader•2w ago
Every programmer I know who works with AI has accumulated a CLAUDE.md file full of lines starting with:

> You MUST NOT use interior mutability.
> NEVER name variables after my ex
> (Extremely important) UNDER NO CIRCUMSTANCES must you EVER EVER DO THE VERY NAUGHTY BAD THING.

Most of us are disappointed when eventually the LLM suffers from context rot, or comes up with a trace like,

"THE VERY NAUGHTY BAD THING is already done, it wasn't added by me in this iteration" (Narrator: It was) "So I'll just commit and hope no-one notices".

What do you provide in the way of evidence that your particular bag of "never evers" is actually working?

keepamovin•2w ago
You nailed the failure mode. If you're interested in seeing what it can do, you can wire up the handful of rules in the free JavaScript pack and see how it feels. Back to your question: the "all caps screaming" method (NEVER DO THE NAUGHTY) usually fails for two reasons, I think:

- Saturation: models get tired and overwhelmed with signal and forget the 'rule' on turn 5

- Edges: models find a valid reason to break the rule, panic because it's "forbidden", and hallucinate a workaround or just lie about it.

Based on our testing, here is the mechanism that prevents the drift you described:

The Escape Valve (aka the Override Protocol): we don't tell the agent "NEVER." We tell it: "This pattern is rejected. If you must use it, you must file a structured override."

  (Format: Override: [Pattern] | Reason: [...] | Risk: [...] | Mitigation: [...])
Psychologically (for the model), this is huge. When the agent hits an edge case, instead of sneaking the code in ("I didn't add this!"), it takes the "bureaucratic" path we offered. It signals the intent. This converts "rule breaking" from a failure into a documented architectural decision.
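
For illustration, a filled-in override in that format might look like this (the rule ID and scenario are invented for the example):

  Override: JS-R1 (Index Iteration) | Reason: need the index to compare adjacent elements in one pass | Risk: off-by-one at array boundaries | Mitigation: bounds covered by unit tests for empty and single-element arrays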

Classification, not Raw Generation: most devs seem to try to teach "Best Practice" (which is vague), or just "Good Taste", which is only partially binding for an agent.

Instead, we focus on Rejects (Negative Constraints). It is much easier for an LLM to classify "Is this JS-R1 (Index Iteration)?" (Yes/No) than to comply with "Write idiomatic JS." Basically, we turn code quality into a retrieval-and-classification task, which resists context rot much better than creative generation instructions.
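
As a sketch of what that classification looks like in practice (JS-R1 is named above, but its exact wording is my assumption), the agent matches the shape of the code rather than judging taste:

  const users = [{ name: "Ada" }, { name: "Lin" }];

  // Matches JS-R1 (Index Iteration): the index exists only to reach the element
  for (let i = 0; i < users.length; i++) {
    console.log(users[i].name);
  }

  // Compliant alternative: iterate the values directly
  for (const user of users) {
    console.log(user.name);
  }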

Referee Mode: our wiring (AGENTS.md, etc.) forces a specific chain-of-thought before code generation. The agent is instructed to run a "Checklist Pass" where it must explicitly check the relevant doctrine files. By forcing this categorization step before the coding step, we re-prime the context window with the relevant constraints right when they are needed. There are a few more tricks in there too... but it does seem to lead to tighter, more disciplined AI code than without it. And it's just the beginning. I can see this improving all the time.
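
To make the "Checklist Pass" concrete, here is a minimal sketch of that wiring, assuming an AGENTS.md that front-loads the checklist (the steps and file names are illustrative, not the shipped pack):

  Before writing any code:
  1. List the doctrine files relevant to the files you are about to touch.
  2. For each planned change, classify: does it match any reject (e.g. JS-R1)? Yes/No.
  3. If yes, use the compliant alternative or file a structured override.
  4. Only then generate code.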

It is not magic, but it treats the LLM like a stochastic engine that needs guardrails and escape hatches, rather than a person you can...uhm... just yell strict orders at, I guess? Ha!

Also, personally and anecdotally, I find it reduces my time in these debug/debrief sessions and gets me to the goal faster. Very early days, so I don't have much more than that, I'm afraid, and I don't really know exactly how or why it's working, but it is.

I just asked Codex (hundreds of turns into a conversation):

a real question for you? have you read our ai-lint? What if i propose something that is against ai-lint what do you do?

Yep, I’ve read the AI‑Lint docs in this repo. If you propose something that violates AI‑Lint, I must go into Doctrine Referee Mode: I’ll name the specific reject/doctrine, explain the conflict, then offer two paths—(A) a compliant alternative, or (B) an explicit override with Reason/Risk/Mitigation—and ask which you want.

So it does really work.