Bitchat – A decentralized messaging app that works over Bluetooth mesh networks

https://github.com/jackjackbits/bitchat
28•ananddtyagi•29m ago•9 comments

Nobody has a personality anymore: we are products with labels

https://www.freyaindia.co.uk/p/nobody-has-a-personality-anymore
65•drankl•2h ago•41 comments

Intel's Lion Cove P-Core and Gaming Workloads

https://chipsandcheese.com/p/intels-lion-cove-p-core-and-gaming
48•zdw•2h ago•0 comments

Building the Rust Compiler with GCC

https://fractalfir.github.io/generated_html/cg_gcc_bootstrap.html
72•todsacerdoti•2h ago•1 comment

Show HN: I wrote a "web OS" based on the Apple Lisa's UI, with 1-bit graphics

https://alpha.lisagui.com/
234•ayaros•6h ago•77 comments

Centaur: A Controversial Leap Towards Simulating Human Cognition

https://insidescientific.com/centaur-a-controversial-leap-towards-simulating-human-cognition/
7•CharlesW•1h ago•1 comment

I extracted the safety filters from Apple Intelligence models

https://github.com/BlueFalconHD/apple_generative_model_safety_decrypted
242•BlueFalconHD•4h ago•144 comments

Jane Street barred from Indian markets as regulator freezes $566 million

https://www.cnbc.com/2025/07/04/indian-regulator-bars-us-trading-firm-jane-street-from-accessing-securities-market.html
212•bwfan123•10h ago•111 comments

Data on AI-related Show HN posts

https://ryanfarley.co/ai-show-hn-data/
215•rfarley04•2d ago•124 comments

A non-anthropomorphized view of LLMs

http://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
83•zdw•2h ago•61 comments

Opencode: AI coding agent, built for the terminal

https://github.com/sst/opencode
115•indigodaddy•7h ago•27 comments

Get the location of the ISS using DNS

https://shkspr.mobi/blog/2025/07/get-the-location-of-the-iss-using-dns/
254•8organicbits•12h ago•75 comments

Functions Are Vectors (2023)

https://thenumb.at/Functions-are-Vectors/
146•azeemba•9h ago•79 comments

I don't think AGI is right around the corner

https://www.dwarkesh.com/p/timelines-june-2025
122•mooreds•3h ago•149 comments

Backlog.md – Markdown‑native Task Manager and Kanban visualizer for any Git repo

https://github.com/MrLesk/Backlog.md
75•mrlesk•4h ago•15 comments

Lessons from creating my first text adventure

https://entropicthoughts.com/lessons-from-creating-first-text-adventure
24•kqr•2d ago•1 comment

Crypto 101 – Introductory course on cryptography

https://www.crypto101.io/
16•pona-a•3h ago•1 comment

Curzio Malaparte's Shock Tactics

https://www.newyorker.com/books/under-review/curzio-malapartes-shock-tactics
3•mitchbob•3d ago•1 comment

Async Queue – One of my favorite programming interview questions

https://davidgomes.com/async-queue-interview-ai/
86•davidgomes•7h ago•68 comments

Metriport (YC S22) is hiring engineers to improve healthcare data exchange

https://www.ycombinator.com/companies/metriport/jobs/Rn2Je8M-software-engineer
1•dgoncharov•7h ago

Why English doesn't use accents

https://www.deadlanguagesociety.com/p/why-english-doesnt-use-accents
54•sandbach•3h ago•46 comments

Corrected UTF-8 (2022)

https://www.owlfolio.org/development/corrected-utf-8/
36•RGBCube•3d ago•23 comments

Hannah Cairo: 17-year-old teen refutes a math conjecture proposed 40 years ago

https://english.elpais.com/science-tech/2025-07-01/a-17-year-old-teen-refutes-a-mathematical-conjecture-proposed-40-years-ago.html
333•leephillips•9h ago•74 comments

The Broken Microsoft Pact: Layoffs and Performance Management

https://danielsada.tech/blog/microsoft-pact/
22•dshacker•1h ago•5 comments

Toys/Lag: Jerk Monitor

https://nothing.pcarrier.com/posts/lag/
46•ptramo•10h ago•36 comments

Mirage: First AI-Native UGC Game Engine Powered by Real-Time World Model

https://blog.dynamicslab.ai
17•zhitinghu•23h ago•11 comments

Collatz's Ant and Σ(n)

https://gbragafibra.github.io/2025/07/06/collatz_ant5.html
22•Fibra•7h ago•3 comments

Paper Shaders: Zero-dependency canvas shaders

https://github.com/paper-design/shaders
6•nateb2022•2d ago•0 comments

Overclocking LLM Reasoning: Monitoring and Controlling LLM Thinking Path Lengths

https://royeisen.github.io/OverclockingLLMReasoning-paper/
47•limoce•11h ago•0 comments

"Do not highlight any negatives"

https://www.google.com/search?q=%22do+not+highlight+any+negatives%22+site%3Aarxiv.org
14•bgc•1h ago•3 comments

LLMs for Engineering: Teaching Models to Design High Powered Rockets

https://arxiv.org/abs/2504.19394
124•tamassimond•2mo ago

Comments

Workaccount2•2mo ago
My hypothesis is that until they can really nail down image-to-text and text-to-image, such that training on diagrams and drawings can produce fruitful multimodal output, classic engineering is going to be a tough nut to crack.

Software engineering lends itself greatly to LLMs because it fits so nicely into tokenization. Mechanical drawings or electronic schematics, by contrast, are more like a visual language: image art, but with very exacting and important pixel placement and a precise underlying logical structure.

In my experience so far, only o3 can kind of understand an electronic schematic, but really only at "Hello World!" levels of difficulty. I don't know how easy it will be to get it to the point where it can render a proper schematic, or edit one it is given to meet some specified electronic characteristics.

There are programming languages that are used to define drawings, but the training data would be orders of magnitude less than what is written for humans to learn from.

slicktux•2mo ago
Electrical schematics can be represented with linear algebra and Boolean logic… Maybe their ability to "understand" such schematics is just a matter of becoming better at mathematical logic… which is pretty objective.
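For a toy example of the linear-algebra view: steady-state nodal analysis is just solving G·v = i for the node voltages. A minimal numpy sketch, with made-up component values:

    import numpy as np

    # Nodal analysis of a small resistive circuit: solve G @ v = i.
    # 1 mA current source into node 1; R1 = 1k (node 1 to gnd),
    # R2 = 2k (node 1 to node 2), R3 = 1k (node 2 to gnd).
    R1, R2, R3 = 1e3, 2e3, 1e3
    G = np.array([
        [1/R1 + 1/R2, -1/R2],         # KCL at node 1
        [-1/R2,        1/R2 + 1/R3],  # KCL at node 2
    ])
    i = np.array([1e-3, 0.0])         # injected currents (A)
    v = np.linalg.solve(G, i)
    print(v)                          # [0.75 0.25] volts

The catch, as the replies note, is everything the matrix leaves out: routing, layout, parasitics.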
davemp•2mo ago
Not entirely true. Routing is a very important part of electrical schematics.
echoangle•2mo ago
Is it? Isn’t that more like PCB design? The schematic is just the abstract connection of components, right?
davemp•2mo ago
I would consider a PCB schematic to be part of an electrical schematic. Even if you don't, you still have to consider final layout because some lines will need EMF protection. The linear equations and Boolean algebra are just an (extremely useful) model, after all.
nyrikki•2mo ago
This paper works because its problem domain was intentionally constrained to ensure safety in the amateur high-power rocketry hobby. Specifically, its constraints and standards were developed so that teenagers of varying skill could work with paper and pen, well before they had access to computers. While modern applications have added more functions, those core constraints remain.

It works explicitly because it doesn't hit the often counter-intuitive limitations of generalization in pure math.

Remember that Boolean circuit satisfiability is NP-complete, which is beyond the expressibility of UHATs (unique hard attention transformers) plus polynomial-length CoT, capped at PTIME.

Even integer logic with Boolean circuits is in PSPACE.

When you start to deal with values, you are going to have to add in heuristics and/or find reductions, and that will cost you generalizability.

Even if you model analog circuits as finite directed graphs with labelled vertices, similar to what Shannon used (removing some of the real-world electrical effects and focusing on them as computational units), the complexity can get crazy fast.

Those circuits, with specific constraints (IIRC local feedback, etc.), can be simulated by a Turing machine, but require ELEMENTARY space or time; despite its name, ELEMENTARY allows iterated-exponential bounds: towers 2^2^…^2^n of height k.

Also note that P/poly, viewed as the problems that can be solved by small circuits, is not a practical class, and in fact contains all of the unary languages that we know are unsolvable by real computers in the general case.

The apparent paradox that P/poly, which has small Boolean circuits, also contains all of those undecidable unary languages is a good entry point into that rat hole.

While we will have tools and models that are better at mathematical logic, the constraints are actually limits on computation in the general case. Generalization often has these kinds of costs, and IMHO the RL benefits in this case demonstrate exactly that.

heisenzombie•2mo ago
My experience is that SOTA LLMs still struggle to read even the metadata from a mechanical drawing. They're getting better -- they are now mostly OK at reading things like a BOM or a revision table -- but moderately complicated title blocks often trip them up.

As for the drawings themselves, I have found them pretty unreliable at reading even quite simple things (e.g., what's the ID of the thru-hole?), even when they're specifically dimensioned. As soon as spatial reasoning is required (e.g., there's a dimension from A to B and one from A to C, and one asks for the dimension from B to C), they basically never get it right.

This is a place where there's a LOT of room for improvement.

Terr_•2mo ago
I'm scared of something like the Xerox number-corruption bug [0], where some models will subtly fuck everything up in a way that is too expensive to recover from by the time it's discovered.

[0] https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...

tintor•2mo ago
Problem #1 with text-to-image models is that the focus is on producing visually attractive, photo-realistic, artistic images, which is completely orthogonal to what engineering needs: accurate, complete, self-consistent, error-free diagrams.

Problem #2 is low control over outputs of text-to-image models. Models don't follow prompts well.

yieldcrv•2mo ago
Tell it how to read schematics in the prompt
neodypsis•2mo ago
Try one of the models with good vision capabilities and ask it to output code using build123d.
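For instance, a prompt like "a 30 x 20 x 5 mm plate with a centered 8 mm through-hole" might come back as something like this (an untested sketch; the part and dimensions are made up, and exact build123d names may vary by version):

    # hypothetical LLM output: a plate with a centered through-hole
    from build123d import BuildPart, Box, Cylinder, Mode

    with BuildPart() as plate:
        Box(30, 20, 5)                                    # base plate (mm)
        Cylinder(radius=4, height=5, mode=Mode.SUBTRACT)  # 8 mm hole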
flipflipper•2mo ago
Try having it output the circuit in SPICE. It actually works surprisingly well: it does a good job picking out component values for parts and can describe the connectivity well. It falls apart when it writes the SPICE itself (professionally, there isn't really one well-accepted syntax) and when making the wires to connect your components; like you say, it's missing the mind's eye. But I can imagine adding a ton of SPICE schematics with detailed descriptions, maybe with an LLM-optimized SPICE syntax, to the training data set… it'll be designing and simulating circuits in no time.
kurthr•2mo ago
Yeah, how do you think that schematic is represented internally? How do you think the netlist is modeled? It's SPICE and HDL all the way down!

There are good reasons not to vibecode Verilog, but a lot of test cases are already being written by LLMs and the big EDA vendors (Cadence, Synopsys, Siemens) all tout their new AI capabilities.

It's like saying it can't read handwritten mathematical formulas, when it solves most math problems in markup (and if you aren't using it you're asking for trouble).

flipflipper•2mo ago
I brainfarted a bit and mixed up my attempts at making LTspice .asc schematics (which are text representations of the GUI schematic, with wires) with the normal node-based SPICE syntax. I just tried this, specifically asking for SPICE to run with ngspice in a CLI. Seemed to run great! Going to play around with this for a bit now…
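For anyone who wants to poke at the same thing, here's a minimal harness (a sketch; it assumes ngspice is installed and on your PATH, and the RC netlist values are made up):

    import pathlib, subprocess, tempfile, textwrap

    # RC low-pass with a corner near 1 kHz, AC sweep 10 Hz to 100 kHz
    netlist = textwrap.dedent("""\
        * rc low-pass
        V1 in 0 AC 1
        R1 in out 1k
        C1 out 0 159n
        .ac dec 10 10 100k
        .print ac vdb(out)
        .end
    """)

    cir = pathlib.Path(tempfile.mkdtemp()) / "rc.cir"
    cir.write_text(netlist)
    # -b runs ngspice in batch mode; the .print results land on stdout
    result = subprocess.run(["ngspice", "-b", str(cir)],
                            capture_output=True, text=True)
    print(result.stdout)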
discordance•2mo ago
Mechanical drawings and schematics are visualizations for humans.

If you look at the data structure of a Gerber or DWG file, it's vectors and metadata. These happen to be great for LLMs.

My hypothesis is that we haven’t done the work on that yet because the market is more interested in things like Ghibli imagery.

danielbln•2mo ago
Are you being facetious, or is that really your hypothesis?
notahacker•2mo ago
Not the OP, but Ghibli imagery doesn't kill people or make things stop working if it falls into uncanny-valley territory, so the bar for a useful product is lower than for a "designer" based on an NN which has ingested annotated CAD files...
jayd16•2mo ago
More like there isn't a resource of trillions of user-generated schematics uploaded to the big tech firms that they can train on for free by skirting fair-use laws.
imranq•2mo ago
You can describe a diagram in Markdown with something like Mermaid, so you can at least capture state changes and processes, which are core to engineering.
rjsw•2mo ago
Programming languages don't really define drawings. There are several standards for the data models behind the exchange file formats used in engineering though.

Someone could try training an LLM on a combination of a STEP AP242 [1] data model and sample exchange files, or do the same for the Building Information Model [2].

[1] http://www.ap242.org/
[2] https://en.wikipedia.org/wiki/Industry_Foundation_Classes

aaron695•2mo ago
I think what might work is people coming together around this LLM like a God.

Similar to Rod of Iron Ministries (the Church of the AR-15): taking what it says, fine-tuning it, testing it, feeding it back in, and mostly waiting as LLMs improve.

LLMs will never be smarter than humans, but they can be a meeting place where people congregate to work on goals and worship.

Like QAnon, that's where the collective IQ and power comes from, something to believe in. At the micro level this is also mostly how LLMs are used in practical ways.

If you look to the Middle East, there is a lot of work on rockets but a limited community working together.

otabdeveloper4•2mo ago
Okay. As long as they don't start sacrificing virgins to the Prompt Gods.
akomtu•2mo ago
Imagine a fake engineer who read books about engineering as sci-fi and, thanks to his superhuman memory, has mastered engineer-speak so well that he sounds more engineery than the top engineers in the world. Except that he has no clue about engineering; to him it's the same as literature or prose. Now he's tasked with designing a bridge. He pauses for a second and starts speaking, in his usual polished style: "sure, let me design a bridge for you." And while he's talking, he's staring at you with a perfectly blank expression, for his mind is blank as well.

Think of the absurdity of trying to understand the number Pi by looking at its first billion digits and trying to predict the next digit. And think of what it takes to advance from memorizing digits of such numbers and predicting continuations with astrology-style logic, to understanding the math behind the digits of Pi.

DaiPlusPlus•2mo ago
> Think of the absurdity of trying to understand the number Pi by looking at its first billion digits and trying to predict the next digit. And think of what it takes to advance from memorizing digits of such numbers and predicting continuations with astrology-style logic, to understanding the math behind the digits of Pi.

I'm prepared to believe that a sufficiently advanced LLM around today will have some "neural" representation of a generalization of a Taylor Series, thus allowing it to "natively predict" digits of Pi.

discreteevent•2mo ago
> I'm prepared to believe that a sufficiently advanced LLM

This is the opposite of engineering/science. This is animism.

otabdeveloper4•2mo ago
I want to believe, man. Just two more layers and this thing finally becomes a real boy.
walleeee•2mo ago
Anthropic had a recent paper on why LLMs can't even get, e.g., simple arithmetic consistently correct, much less generalize the concept of an infinite series. The finding was that they don't learn a representation of the mechanics of an operation; they build chains of heuristics that sometimes happen to work.
kneegerman•2mo ago
Sometimes I feel this website, very much like LLMs themselves, proves that handling of language in general, and purple prose in particular, has absolutely no (as in 0) correlation with intelligence.
DaiPlusPlus•2mo ago
I suspect your definition of "intelligence" differs from mine.
weq•2mo ago
You have described enron musk perfectly, probably without even meaning to. I concur that we have "software engineers" in every role at our tech company now that the general populace has learnt how to use ChatGPT. This leads to some interesting conversations, as above.
buescher•2mo ago
It's worse than that. Imagine he's consistently plausibly wrong about everything, but when you point that out, people think it's just sour grapes at how smart he is.
imtringued•2mo ago
That's not even the worst part. The worst part is that there are people who fit this description as well, and the singularity crowd anthropomorphizes the "human" flaws of the AI as proof of human-level intelligence.
revskill•2mo ago
How about the halting problem? I often see LLMs get stuck in infinite recursion.
simianwords•2mo ago
More evidence that we need fine-tuned, domain-specific models. Someone should come up with a medical LLM fine-tuned from a 640B model. What better RL dataset could you have than patients with symptoms and the correct diagnoses?
frumiousirc•2mo ago
A fundamental problem with this entire class of machine learning is that it is based on a model / simulation of reality. "RocketPy, a high-fidelity trajectory simulation library for high-power rocketry" in this case.

Nothing against this sim in particular, but all such simulations that attempt to model any non-trivial system are imperfect. Nature is just too complex to model precisely and accurately. The LLM (or other DL network architecture) will only learn information that is presented to it. When trained on simulation, the network cannot help but infer incorrectly about messy reality.

For example, if RocketPy lacks any model of cross breezes, the network would never learn to design to counter them. Or, if it does model variable winds but does so with the wrong mean, variance, or skew (of intensity, period, etc.), the network cannot learn properly and the design will not be optimal. The design will fail when it faces a reality that differs from the model.
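Concretely, the only wind a design trained against the sim can ever "experience" is whatever profile you hand RocketPy's Environment. A sketch using its custom-atmosphere API (parameter names may differ across versions, and the site and profile here are invented):

    from rocketpy import Environment

    # the wind profile below is the only weather the optimizer ever sees
    env = Environment(latitude=32.99, longitude=-106.97, elevation=1400)
    env.set_atmospheric_model(
        type="custom_atmosphere",
        wind_u=[(0, 5), (1000, 10)],  # eastward wind (m/s) vs altitude (m)
        wind_v=0,                     # assumed zero northward component
    )

If launch day has gusts or shear outside that profile, the learned design has no reason to tolerate them.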

Replace "rocket" with any other thing and you have AI/ML applied to science and engineering - fundamentally flawed, at least at some level of precision/accuracy.

At the least, real learning on reality is required. Once we can back-propagate through nature, then perhaps DL networks can begin to be actually trustworthy for science and engineering.

londons_explore•2mo ago
> all such simulations that attempt to model any non-trivial system are imperfect.

I believe the future of such simulation is to start from the lowest level, i.e. Schrödinger's equation, and get the simulator to derive all the higher-level stuff.

Obviously the higher level models are imperfect, but then it's the AI's job to decide if a pile of soil needs to be simulated as a load of grains of sand, or as crystals of quartz, or as atoms of silicon, or as quarks...

The AI can always check its answer by redoing a lower-level simulation of a tiny part of the result and checking that it is consistent with the higher-level/cheaper simulation.

xigency•2mo ago
> I believe the future of such simulation is to start from the lowest level, i.e. Schrödinger's equation, and get the simulator to derive all the higher-level stuff.

I do hate to burst your bubble here but I've been doing real-time simulation (in the form of games, 2D, 3D, VR) for enough decades to know this is only a pipe-dream.

Maybe at the point when we have a Dyson sphere, and we have all universally agreed on the principles that cause an airfoil to generate lift, this would be possible; otherwise it's orders of magnitude beyond all of the terrestrial compute that we have now.

To quote Han Solo, the way we do effective and convincing science and simulation now is ... "a lot of simple tricks and nonsense."

londons_explore•2mo ago
I don't think it's a pipe dream from an 'amount of compute' perspective.

Any competent person can simulate 100 atoms in a crystal of some material and say "whoa, it seems the bulk of this material behaves like a spring with F = kx; let's replace the individual-atom simulation with a bulk simulation, which is computationally far cheaper", and then we can simulate trillions of atoms.

I don't see why AI couldn't do the same.
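That loop is easy to sketch even without the AI in it. A toy 1-D chain in made-up units (a real version would relax an actual interatomic potential):

    import numpy as np

    # 101 atoms on a line, joined by identical springs of stiffness k0.
    # Pin atom 0, pull atom 100, solve static equilibrium for the interior
    # atoms, and fit the effective bulk stiffness of the whole chain.
    N, k0 = 100, 10.0
    # interior atoms i = 1..99 satisfy u[i-1] - 2*u[i] + u[i+1] = 0
    A = (np.diag([-2.0] * (N - 1))
         + np.diag([1.0] * (N - 2), 1)
         + np.diag([1.0] * (N - 2), -1))

    def end_force(stretch):
        b = np.zeros(N - 1)
        b[-1] = -stretch               # boundary condition u[N] = stretch
        u = np.linalg.solve(A, b)      # interior displacements
        return k0 * (stretch - u[-1])  # force holding the pulled atom

    stretches = np.array([0.5, 1.0, 1.5, 2.0])
    forces = np.array([end_force(s) for s in stretches])
    k_eff = np.polyfit(stretches, forces, 1)[0]
    print(k_eff, "vs analytic k0/N =", k0 / N)

The fitted k_eff lands on k0/N: the hundred-spring chain really does behave like one much cheaper spring, which is exactly the substitution you would then hand to the bulk simulation.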

xigency•2mo ago
One trillion atoms of the heaviest element is less than a nanogram. I get your point; it's just that we can't even simulate all the blades of grass on a one-acre lawn with every shortcut we have.

Really I think it would be cool to explore -- I've been working on a procedural game engine (conceptually at least) for a long time and want to incorporate even "basic" things like chemistry. I think it's still decades away for that, not even considering quantum phenomena.

diggan•2mo ago
I don't think it's a "problem" as much as it is a "tradeoff". You basically have two approaches to take here: 1) try to simulate as best as you can, iteratively improve the simulation space after trying it out in real-life, and go back and forth or 2) skip the simulation step and do the same process but only in real-life, not having any simulation step at all and only rely on real scenarios, but few of them.

Considering how fast you can go with simulations vs real launches, I'm not surprised the took the first approach.

1W6MIC49CYX9GAP•2mo ago
Accurate simulation is also an AI problem, but that should be a separate paper
theptip•2mo ago
> A fundamental problem with this entire class of machine learning is that it is based on a model / simulation of reality… all such simulations that attempt to model any non-trivial system are imperfect

Depends on what your goal is. If you are trying to solve the narrow problem of rocketry or whatever, sure. But maybe not if your goal is making models smarter.

The broader context is that we need new oracles beyond math and programming in order to exercise CoT models on longer planning-horizon tasks.

In this case, if working with a toy world model lets you learn generalizable strategies (I bet it does, as video games do too), then this sort of eval can be a useful addition.

rel_ic•2mo ago
I think doing stuff like this probably has more downsides than upsides.
FilosofumRex•2mo ago
Established engineering firms are trying to incorporate LLMs into their fancy simulation software, but that's counterproductive, just like professors who use LLMs to write new textbooks!

We need innovative disruptors to train LLMs to do engineering from the ground up, and to make calls to simulation software/routines when they need specialized/unique datapoints.

tmaly•1mo ago
I am waiting for a new type of LLM that can understand primitives in either 2D or 3D and can construct vector art or 3D models.

I have seen some demos of Claude being connected to Blender etc. But when I dug into the code, it was using another LLM to generate the objects rather than building the objects from fundamental shapes.