
A Software Development Methodology for Disciplined LLM Collaboration

https://github.com/Varietyz/Disciplined-AI-Software-Development
80•jay-baleine•13h ago

Comments

sublinear•11h ago
This may produce some successes, but it's so much more work than just writing the code yourself that it's pointless. This structured way of working with generative AI is so strict that there's no scaling it up, either. It feels like it's been years since this was established to be a waste of time.

If the goal is to start writing code without knowing much, it may be a good way to learn and to establish a similar discipline in yourself for tackling projects. I think there's been research showing that training wheels don't work either, though. Still, whatever works and gets people learning to write code for real can't be bad, right?

weego•10h ago
It's just a function of how much code you need to write and how much uninterrupted time you have.

Editing this kind of configuration has far less cognitive load and loading time, so distractions aren't as destructive to the task as they are when coding. You can then also structure your time so that productive agent coding happens while you're doing business-critical tasks like meetings and calls.

I do think this is overkill, though, and it's a bad plan and far too early to try to formalize The One Way To Instruct AI How To Code, but every advance is an opportunity to gain career traction, so fair play.

jay-baleine•10h ago
What tends to get overlooked is the actual development speeds these projects achieve.

Take the PhiCode runtime, for example: a complete programming language with code conversion, performance optimization, and security validation, built in 14 days. The commit history provides trackable evidence; manual development of comparable functionality would require months of work as a solo developer.

The "more work" claim doesn't hold up to measurement. AI generates code faster than manual typing while systematic constraints prevent the architectural debt that creates expensive refactoring cycles later. The 5-minute setup phase establishes foundations that enable consistent development throughout the project.

On scalability, the runtime demonstrates 70+ modules maintaining architectural consistency. The 150-line constraint forced modularization that made managing these components feasible - each remains comprehensible and testable in isolation. The approach scales by sharing core context (main entry points, configuration, constants, benchmarks) rather than managing entire codebases.

Teams can collaborate effectively under shared architectural constraints without coordination overhead.

This isn't about training wheels or learning syntax. The methodology treats AI as a systematic development partner focused on architectural thinking rather than ad-hoc prompting. AI handles syntax perfectly - the challenge lies in directing it toward maintainable, scalable solutions at production speed.

Previous attempts at structured AI collaboration may have failed, but this approach addresses specific failure modes through empirical measurement rather than theoretical frameworks.

The perceived 'strictness' provides flexibility within proven constraints. Developers retain complete freedom in implementation approaches, but the constraints prevent common pitfalls like monolithic files or tangled dependencies - like guardrails that keep you on the road.

The project examples and commit histories provide concrete evidence for these development speeds and architectural outcomes.

gravypod•9h ago
> The PhiCode runtime for example - a complete programming language with code conversion, performance optimization, and security validation. It was built in 14 days. The commit history provides trackable evidence; manual development of comparable functionality would require months of work as a solo developer.

I've been looking at the docs, and something I don't fully understand is what PhiCode Runtime actually does. It seems like:

1. Mapping of ligatures -> keywords (ex: ƒ -> def).

2. Caching of four types (source content, python parsing, module imports, and python bytecode).

3. A call into the phirust-transpiler, which seems to try to convert things into rust code?

4. An http api for requesting these operations.

A lot of this seems to be done with regexes. Was there a motivation for doing string replace instead of python -> ast -> conversion -> new ast -> source? What is this code being used for?

CuriouslyC•9h ago
Claude Code (and Claude in general, which was 99% used here) likes regexes for this sort of thing. You have to tell it to use tree-sitter, or it'll produce a brittle solution by default.
jay-baleine•9h ago
Your four points are correct:

1. Symbol mapping: Yes - ƒ → def, ∀ → for, λ → lambda, π → print, etc. Custom mappings are configurable.

2. Multi-layer caching: Confirmed - source content cache, transpiled Python cache, module import specs, and optimized bytecode with batch writes.

3. PhiRust acceleration: Clarification - it's a Rust-based transpiler that handles the symbol-to-Python conversion for performance, not converting Python to Rust. When files exceed 300KB, the system delegates transpilation to the Rust binary instead of using Python regex processing.

4. HTTP API: Yes - provides endpoints for transpilation, symbol mapping queries, and engine info to enable IDE integration.

The technical decision to use string replacement over AST manipulation came down to measured performance differences.

The benchmarks show 3,000,000+ chars/sec throughput on extreme stress tests and 1,200,000+ chars/sec on typical workloads. Where AST parsing, transformation, and regeneration introduces overhead that makes real-time symbol conversion impractical for large codebases.

The string replacement preserves exact formatting, comments, and whitespace while maintaining compatibility with any Python syntax, including future language features that AST parsers might not support yet. Each symbol maps directly to its Python equivalent without intermediate representations that could introduce transformation errors.
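To illustrate, a minimal sketch of this kind of direct symbol replacement (the mapping table and helper here are hypothetical, not the actual PhiCode implementation, which is configurable and delegates large files to the Rust path):

```python
# Hypothetical symbol table; the real PhiCode mapping is configurable.
SYMBOL_MAP = {
    "ƒ": "def",
    "∀": "for",
    "λ": "lambda",
    "π": "print",
}

def transpile(source: str) -> str:
    """Replace each symbol with its Python keyword.

    Everything else (formatting, comments, whitespace) passes through
    untouched, so the output stays diffable against the input. A
    production version would also need to skip string literals.
    """
    for symbol, keyword in SYMBOL_MAP.items():
        source = source.replace(symbol, keyword)
    return source

print(transpile("ƒ greet(name):\n    π(f'hi {name}')"))
```

Because the output is plain Python, it can be handed straight to the normal import machinery or cached as bytecode.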

The cache system includes integrity validation to detect corrupted cache entries and automatic cleanup of temporary files. Cache invalidation occurs when source files change, preventing stale transpilation results. Batch write operations with atomic file replacement ensure cache consistency under concurrent access.
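A common way to get this kind of change-driven invalidation is to key the cache on a digest of the source, so any edit automatically misses the stale entry. A sketch under that assumption (not necessarily PhiCode's actual scheme):

```python
import hashlib

# In-memory cache keyed by a digest of the source text; a persistent
# version would write entries to disk with atomic renames for
# consistency under concurrent access.
_cache: dict[str, str] = {}

def cached_transpile(source: str, transpile) -> str:
    key = hashlib.sha256(source.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = transpile(source)  # miss: transpile and store
    return _cache[key]                   # hit: any source edit changes the key
```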

The runtime offers cognitive improvements for domain-specific development. Mathematical algorithms become more readable when written with actual mathematical notation rather than verbose keywords. It can also help in game development, where certain functions can benefit from different naming (e.g., def → skill, def → special, def → equipment).

The gradual adoption path matters for production environments. Teams can introduce custom syntax incrementally without rewriting existing codebases since the transpiled output remains standard Python. The multi-layer caching system ensures that symbol conversion overhead doesn't impact execution performance.

This opens up domain-specific languages for mathematics, finance, education, or any field where visual clarity improves comprehension. The system maintains full Python compatibility while enabling cognitive improvements through customizable syntax.

UncleEntity•5h ago
> Where AST parsing, transformation, and regeneration introduces overhead that makes real-time symbol conversion impractical for large codebases.

I don't really understand why you need to do anything different when using a parser versus the regex method; there's no real reason to parse to an AST (with all the Python goodness involved in that) at all when the parser can just do the string replacement, the same as whatever PhiRust is doing.

I have this PEG VM (based on the LPEG papers) that I've been poking at for a little while now. While admittedly I haven't actually tested its speed, I'd be amazed if it couldn't do 3 MB/s; in fact, the main limiting factor seems to be getting bytes off the disk, and the parser runtime is just noise compared to that, with all the 'musttail' shenanigans going on.

And even that is overkill for simple keyword replacement, given all the work done over the years on macro systems needing to be blazing fast -- which is not something I've looked into at all to see how they do their magic, except a brief peek at C's macro rules, which are, let's just say, complicated.

visarga•8h ago
> The perceived 'strictness' provides flexibility within proven constraints. Developers retain complete freedom in implementation approaches, but the constraints prevent common pitfalls like monolithic files or tangled dependencies - like guardrails that keep you on the road.

I agree. The only way to use AI is to constrain it: to provide a safe space where it can bang against the walls and iterate toward the solution. I use documentation, plans, and tests as the constraint system.

CuriouslyC•9h ago
It's not. In 10 minutes of back and forth with ChatGPT, plus some templates and a validation service, I can get a detailed spec in place that will consistently keep an agent working for 3+ hours, with the end result being 85% test coverage, E2E user-story testing, etc., so when I come back to the project I'm only doing acceptance testing.

The velocity you buy by taking yourself out of the loop with analytic guardrails is just insane; I can't overstate it. The clear plan and guardrails are important, though; otherwise you end up with a pile of slop that doesn't work and is unmaintainable.

CuriouslyC•10h ago
The most important thing is to have a strong plan cycle in front of your agent work; if you do that, agents are very reliable. You need a deep research cycle that collects a covering set of code that might need to be modified for a feature, feeds it into Gemini/GPT-5 to get a broad codebase-level understanding, then runs a debate cycle on how to address it, with the final artifact being a hyper-detailed plan that goes file by file and provides an outline of the changes required.

Beyond this, you need to maintain good test coverage, and you need to have agents red-team your tests aggressively to make sure they're robust.

If you implement these two steps, your agent performance will skyrocket. The planning phase will produce plans that Claude can iterate on for 3+ hours in some cases, if you tell it to complete the entire task in one shot, and the robust test validation and change-set analysis will catch agents that solve an easier problem because they got frustrated, or that don't follow directions.

skydhash•10h ago
By that point I would have already produced the 20-line diff for the ticket. Huge commits (or change requests) are usually scaffolding, refactoring, or design changes to support new features. You've also got generated code and verbose languages like CSS: stuff where the more knowledge you have of the code, the faster you can be.

The daily struggle was always those 10-line diffs where you have to learn a lot (from the stakeholder, by debugging, from the docs).

CuriouslyC•9h ago
A deep plan cycle will find stuff like this, because it's looking at the whole relevant portion of your codebase at once (and optionally the web, your internal docs, etc). It'll just generate a very short plan for the agent.

The important thing is that this process is entirely autonomous. You create an issue; that hooks the planners. The completion of a plan artifact hooks a test implementer; the completion of tests hooks the code implementers (having cheaper models generate multiple solutions and taking the best diff works well); and the completion of a solution + PR hooks code and security review, test red-teaming, etc.
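The hook chain described above can be sketched as a simple stage pipeline (every name here is illustrative, not a real tool's API; in practice each stage would be an agent triggered by the previous artifact's completion):

```python
# Hypothetical sketch: each completed artifact triggers the next stage,
# so no human sits between issue creation and final review.
def run_pipeline(issue: str) -> list[tuple[str, str]]:
    stages = [
        ("plan", lambda prev: f"plan for: {prev}"),     # planners hook
        ("tests", lambda prev: f"tests from: {prev}"),  # test implementer
        ("code", lambda prev: f"code passing: {prev}"), # code implementer(s)
        ("review", lambda prev: f"review of: {prev}"),  # review + red-team
    ]
    artifacts = []
    prev = issue
    for name, stage in stages:
        prev = stage(prev)          # each stage consumes the prior artifact
        artifacts.append((name, prev))
    return artifacts
```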

esafak•10h ago
Modern agentic tools already draw up plans before implementation. Some even define "plan" and "build" agents: https://opencode.ai/docs/agents/#built-in
CuriouslyC•9h ago
Agents are really bad at planning unless the agent is farming out the plan to a deep research tool; as your codebase grows, things are gonna end badly.
mehdibl•10h ago
The most important things you always need to do:

1. Plan, and review the plan.

2. Review the code as changes are made, before the task even finishes, and fix drift as soon as you see it.

3. Then review again.

4. Add tests and use all your quality tools; don't rely 100% on the LLM.

5. Don't trust an LLM's review of its own code, as it's very biased.

These are basic steps that you can adapt as you like.

Avoid a FULLY AUTOMATED AGENT pipeline where you review the code only at the end, unless it's a very small task.

CuriouslyC•9h ago
LLMs can review their own code, but you must give them a fresh context (so they don't know they wrote it), and you need to instruct them to be very strict. Also, some models are better at code review than others: Gemini/GPT-5 are very good at it as long as you give them sufficient codebase context; Claude is not so great here.
perrygeo•9h ago
There's some irony here: far from handling the details, LLMs are forcing programmers to adopt hyper-detailed, disciplined practices. They've finally cajoled software developers into writing documentation! Worth noting that we've always had the capacity to implement these practices to improve HUMAN collaboration, but rarely bothered.
grork•8h ago
We've ultimately decided to treat the models with more respect, nurturing, and collaborative support than we ever did our fellow human keyboard smashers: writing all the documentation and detailed guidance, and allowing them multiple attempts, to help the LLMs be successful. But Brenda, the early-in-career new grad? "Please read this poorly written, 5-year-old, incomplete wiki, and don't ask me questions."

I’ve been thinking about this for months, and still don’t know what to make of it.

pydry•7h ago
The halo effect around LLMs is something crazy.
henrebotha•6h ago
I would also be motivated to write better documentation if I had a junior dev sitting right next to me, utterly incapable of doing any work unless I document how, but also instantly acting on any documentation I produce and giving me rapid feedback on which parts are sending the wrong message.
ianbicking•4h ago
Respect (or the lack thereof) goes both ways, for both the writer and the reader. I have frequently felt disrespected by producing documentation, plans, etc. that aren't read. In the end I mostly rely on oral transmission of knowledge, because then at least I can read the room and know if I'm providing some value to people, and ultimately we're both trapped in the room together and have to invest the same amount of time.

The LLM isn't always smart, but it's always attentive. It rewards that effort in a way that people frequently don't. (Arguably this is a company culture issue, but it's also a widespread issue.)

perrygeo•1h ago
Great framing of the problem. I do think it's a culture issue with "Agile" practices in particular: by design, there is no time budgeted for reading, writing, reflection, or discussion. Sprint, sprint, sprint.

In organizations that value innovation, people will spend time reading and writing. It's a positive feedback loop, almost a litmus test of quality of the work culture.

j45•8h ago
Lol, empathy and communication skills are important to develop after all.
627467•6h ago
What I keep seeing missing from AI-labor-replacement discussions is that technology may seem to replace human labor, but it doesn't really replace human accountability.

Organizations often seem capable of diffusing blame for mistakes within their human bureaucracy, but as bureaucracy is reduced by AI, individuals become more exposed.

This alone, in my view, is sufficient counterpressure against fully replacing humans in organizations.

Shorter version: if my AI setup fails, I'm the one to blame. If I do a bad job of helping coworkers perform better, is the blame fully mine?

ares623•5h ago
I wonder if this is what will kill LLMs in the software development domain.

It turns out that writing and maintaining documentation is just that universally hated.

ianbicking•4h ago
My experience writing in a professional setting is that people mostly don't read what I write, and the more effort I put into being thorough the less likely that it will be read.
wheelerwj•3h ago
That is an interesting observation. You're correct: the LLM inherently reads and digests every token you offer it.
jemiluv8•6h ago
Reminds me of when I was demonstrating Claude Code to a friend recently. My friend was a huge Cursor user and was just curious about the CLI tool and such.

In the end, regardless of framework or approach, I believe there is a way of using LLMs that will optimize work for developers. I once worked with a tech lead who reviewed all PRs and insisted on imports being arranged in a specific order. I found it insulting, but did it anyway. Now I don't; the bot does.

In the same way, LLMs can be really helpful in planning and building out specific things like REST endpoints, small web components, single functions or classes, and so on.

Glad people are attempting to work on potential solutions for approaching work in a way that takes advantage of these new tools.

jmull•6h ago
Lucky LLMs. All I get are forwarded, meandering email chains and almost entirely discursive meetings to attend.
kanak8278•5h ago
Can anyone help me figure out how to integrate this with Claude Code? I went through it, and I already follow a few things manually, but when I think about integrating most of the parts (not all), I don't know where to put them for the coding LLM to understand. I fear that if I put everything in CLAUDE.md, it will just be too much context for CC.
wheelerwj•3h ago
In Claude Code you can definitely put most of this into CLAUDE.md: much of it in a global CLAUDE.md and some in the project CLAUDE.md.

which part are you specifically uncertain about?

rsecora•4h ago
Back in the day, when business computing emerged (COBOL, mainframes...), the distinction between systems analysts and programmers appeared. Analysts understood business needs; programmers implemented those specs in code.

Years later, the industry evolved to integrate both roles, and new methodologies and new roles appeared.

Now humans write specs and AI agents write code. Role separation has been a principle of the division of labor since Plato.