Similar to Rod of Iron Ministries (The Church of the AR-15): taking what it says, fine-tuning it, testing it, feeding it back in, and mostly waiting as LLMs improve.
LLMs will never be smarter than humans, but they can be a meeting place where people congregate to work on goals and worship.
Like QAnon, that's where the collective IQ and power come from: something to believe in. At the micro level, this is also mostly how LLMs are used in practice.
If you look to the Middle East, there is a lot of work on rockets but only a limited community working together.
Think of the absurdity of trying to understand the number π by looking at its first billion digits and trying to predict the next one. And think of what it takes to advance from memorizing digits and extrapolating with astrology-style logic to understanding the math that generates them.
I'm prepared to believe that a sufficiently advanced LLM around today has some "neural" representation of a generalization of a Taylor series, allowing it to "natively predict" digits of π.
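The contrast being drawn here can be made concrete. A short program that encodes the underlying math generates digits of π exactly, where pattern-matching on the digit stream yields nothing. A minimal Python sketch, written for this thread, using Machin's arctangent formula (one classical route, each arctan evaluated from its Taylor series):

    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239),
    # evaluated in scaled integer arithmetic.

    def arctan_inv(x: int, digits: int) -> int:
        """arctan(1/x) scaled by 10**(digits + 10), from the series
        arctan(1/x) = 1/x - 1/(3*x**3) + 1/(5*x**5) - ..."""
        scale = 10 ** (digits + 10)      # 10 guard digits for truncation
        term = scale // x                # first term, 1/x
        total, n, sign = term, 1, 1
        while term:
            term //= x * x               # next odd power of 1/x
            n += 2
            sign = -sign
            total += sign * (term // n)
        return total

    def pi_digits(digits: int) -> str:
        scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
        s = str(scaled // 10 ** 10)      # drop the guard digits
        return s[0] + "." + s[1:]

    print(pi_digits(50))
    # 3.14159265358979323846264338327950288419716939937510

Twenty-odd lines of understanding replace a billion digits of memorization.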
This is the opposite of engineering/science. This is animism.
Nothing against this sim in particular, but all such simulations of any non-trivial system are imperfect. Nature is just too complex to model precisely and accurately. An LLM (or any other DL architecture) only learns the information presented to it. When trained on a simulation, the network cannot help but infer incorrectly about messy reality.
For example, if RocketPy lacks any model of cross breezes, the network would never learn to design against them. Or, if it does model variable winds but with the wrong mean, variance, or skew (of intensity, period, etc.), the network cannot learn properly and the design will not be optimal. The design will fail when it faces a reality that differs from the model.
Replace "rocket" with anything else and you have AI/ML applied to science and engineering: fundamentally flawed, at least at some level of precision/accuracy.
At the least, real learning on reality is required. Once we can back-propagate through nature, perhaps DL networks can begin to be genuinely trustworthy for science and engineering.
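To make the wrong-mean failure mode concrete, here is a toy sketch in plain NumPy (not RocketPy; the wind model, numbers, and "design" are all invented for illustration). A crosswind-cancelling aim offset is tuned inside a simulator whose wind mean is biased, then evaluated against "reality":

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_miss(aim_offset: float, wind_mean: float, n: int = 10_000) -> float:
        """Average landing miss when crosswind ~ N(wind_mean, 0.5)
        and the design counters it with a fixed aim offset."""
        wind = rng.normal(loc=wind_mean, scale=0.5, size=n)
        return float(np.mean(np.abs(wind - aim_offset)))

    SIM_WIND_MEAN, REAL_WIND_MEAN = 0.0, 1.0   # the simulator is biased

    # "Optimal" design found entirely inside the simulator.
    offsets = np.linspace(-2.0, 2.0, 401)
    tuned = min(offsets, key=lambda a: mean_miss(a, SIM_WIND_MEAN))

    print(f"offset tuned in sim: {tuned:+.2f}")
    print(f"miss in sim:         {mean_miss(tuned, SIM_WIND_MEAN):.3f}")
    print(f"miss in reality:     {mean_miss(tuned, REAL_WIND_MEAN):.3f}")

The design is optimal by every metric the network can see, and still misses by roughly the full bias once real wind shows up.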
I believe the future of such simulation is to start from the lowest level, i.e. Schrödinger's equation, and have the simulator derive all the higher-level behavior.
Obviously the higher-level models are imperfect, but then it's the AI's job to decide whether a pile of soil needs to be simulated as a load of grains of sand, or as crystals of quartz, or as atoms of silicon, or as quarks...
The AI can always check its answer by re-running a lower-level simulation of a tiny part of the result and verifying that it is consistent with the higher-level/cheaper simulation, as in the sketch below.
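A minimal sketch of that consistency check, with the fidelity levels stood in for by the step size of a toy integrator (everything here is invented for illustration; real multiscale simulation is vastly harder):

    from math import exp

    def simulate(f, x0: float, t_end: float, dt: float) -> float:
        """Forward-Euler integration of dx/dt = f(x): the 'simulator'."""
        x = x0
        for _ in range(round(t_end / dt)):
            x += dt * f(x)
        return x

    def adaptive_fidelity(f, x0, t_end, dt=0.1, tol=1e-3, max_refines=10):
        """Use the cheapest level whose answer agrees with one level finer."""
        for _ in range(max_refines):
            coarse = simulate(f, x0, t_end, dt)
            fine = simulate(f, x0, t_end, dt / 2)   # the lower-level check
            if abs(coarse - fine) < tol:
                return fine, dt                     # consistent: accept
            dt /= 2                                 # inconsistent: go deeper
        return fine, dt

    decay = lambda x: -x                            # dx/dt = -x, exact e**-t
    x, dt = adaptive_fidelity(decay, x0=1.0, t_end=1.0)
    print(f"x(1) ~= {x:.5f} at dt = {dt}  (exact {exp(-1):.5f})")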
Considering how fast you can iterate with simulations vs. real launches, I'm not surprised they took the first approach.
Workaccount2•11h ago
Software engineering lends itself to LLMs because it fits so nicely into tokenization, whereas mechanical drawings and electronic schematics are more like a visual language: image art, but with very exacting pixel placement and a precise underlying logical structure.
In my experience so far, only o3 can kind of understand an electronic schematic, and only at a "Hello World!" level of difficulty. I don't know how easy it will be to get to the point where it can render a proper schematic, or edit one it is given to meet some specified electrical characteristics.
There are programming languages used to define drawings, but their training data would be orders of magnitude smaller than what exists for code written by and for humans.
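For a sense of what such a language looks like, here is a minimal sketch in CadQuery, a real Python-embedded CAD language (API details from memory, so treat them as approximate): a plate with a centered thru hole, exported to a standard interchange format.

    import cadquery as cq

    plate = (
        cq.Workplane("XY")
        .box(40, 30, 5)      # 40 x 30 x 5 mm plate
        .faces(">Z")         # select the top face
        .workplane()
        .hole(6)             # 6 mm diameter hole, all the way through
    )

    cq.exporters.export(plate, "plate.step")  # CAD interchange format

Note how token-friendly that is compared to a raster image of the same part.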
heisenzombie•10h ago
As for the drawings themselves, I have found them pretty unreliable at reading even quite simple things (e.g., what's the ID of the thru hole?), even when they're specifically dimensioned. As soon as spatial reasoning is required (e.g., there's a dimension from A to B and from A to C and one asks for the dimension from B to C), they basically never get it right.
This is a place where there's a LOT of room for improvement.
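For reference, that chained-dimension case amounts to one subtraction, which makes the failure all the more striking (points assumed collinear here, as on a typical dimension chain):

    dim_A_to_B = 12.5   # mm, given on the drawing
    dim_A_to_C = 30.0   # mm, given on the drawing
    dim_B_to_C = dim_A_to_C - dim_A_to_B   # implied, never printed
    print(dim_B_to_C)   # 17.5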
tintor•10h ago
Problem #2 is low control over outputs of text-to-image models. Models don't follow prompts well.
discordance•4h ago
If you look at the data structure of a Gerber or DWG file, it's vectors and metadata. These happen to be great for LLMs.
My hypothesis is that we haven’t done the work on that yet because the market is more interested in things like Ghibli imagery.
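To illustrate, here is a toy parse of a Gerber fragment (hand-written for this comment; real Gerber has far more to it). The "drawing" really is just a token stream of metadata plus move/draw commands:

    import re

    gerber = """\
    %FSLAX26Y26*%
    %MOMM*%
    %ADD10C,0.100*%
    D10*
    X0Y0D02*
    X5000000Y0D01*
    X5000000Y3000000D01*
    M02*
    """

    # With the 2.6 coordinate format above, values are in units of 1e-6 mm.
    for x, y, op in re.findall(r"X(-?\d+)Y(-?\d+)D0([12])", gerber):
        verb = "move to" if op == "2" else "draw to"
        print(f"{verb} ({int(x) / 1e6:.1f} mm, {int(y) / 1e6:.1f} mm)")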