
Complete silence is always hallucinated as "ترجمة نانسي قنقر" ("Translation by Nancy Qunqar") in Arabic

https://github.com/openai/whisper/discussions/2608
145•edent•2h ago•56 comments

Global hack on Microsoft Sharepoint hits U.S., state agencies, researchers say

https://www.washingtonpost.com/technology/2025/07/20/microsoft-sharepoint-hack/
588•spenvo•1d ago•263 comments

Uv: Running a script with dependencies

https://docs.astral.sh/uv/guides/scripts/#running-a-script-with-dependencies
280•Bluestein•8h ago•80 comments

AI comes up with bizarre physics experiments, but they work

https://www.quantamagazine.org/ai-comes-up-with-bizarre-physics-experiments-but-they-work-20250721/
155•pseudolus•6h ago•71 comments

An unprecedented window into how diseases take hold years before symptoms appear

https://www.bloomberg.com/news/articles/2025-07-18/what-scientists-learned-scanning-the-bodies-of-100-000-brits
26•helsinkiandrew•3d ago•7 comments

Jujutsu for busy devs

https://maddie.wtf/posts/2025-07-21-jujutsu-for-busy-devs
144•Bogdanp•7h ago•141 comments

What happens when an octopus engages with art?

https://www.cnn.com/2025/07/17/style/what-happens-when-an-octopus-engages-with-art
19•robinhouston•4d ago•7 comments

Kapa.ai (YC S23) is hiring software engineers (EU remote)

https://www.ycombinator.com/companies/kapa-ai/jobs/JPE2ofG-software-engineer-full-stack
1•emil_sorensen•51m ago

What went wrong inside recalled Anker PowerCore 10000 power banks?

https://www.lumafield.com/article/what-went-wrong-inside-these-recalled-power-banks
391•walterbell•13h ago•184 comments

NASA's X-59 quiet supersonic aircraft begins taxi tests

https://www.nasa.gov/image-article/nasas-x-59-quiet-supersonic-aircraft-begins-taxi-tests/
64•rbanffy•2d ago•38 comments

AccountingBench: Evaluating LLMs on real long-horizon business tasks

https://accounting.penrose.com/
454•rickcarlino•15h ago•118 comments

Don't bother parsing: Just use images for RAG

https://www.morphik.ai/blog/stop-parsing-docs
246•Adityav369•14h ago•65 comments

TrackWeight: Turn your MacBook's trackpad into a digital weighing scale

https://github.com/KrishKrosh/TrackWeight
530•wtcactus•17h ago•129 comments

AI could have written this: Birth of a classist slur in knowledge work [pdf]

https://advait.org/files/sarkar_2025_ai_shaming.pdf
25•deverton•5h ago•32 comments

Look up macOS system binaries

https://macosbin.com
30•tolerance•3d ago•5 comments

Losing language features: some stories about disjoint unions

https://graydon2.dreamwidth.org/318788.html
74•Bogdanp•3d ago•19 comments

Erlang 28 on GRiSP Nano using only 16 MB

https://www.grisp.org/blog/posts/2025-06-11-grisp-nano-codebeam-sto
147•plainOldText•12h ago•8 comments

New records on Wendelstein 7-X

https://www.iter.org/node/20687/new-records-wendelstein-7-x
214•greesil•16h ago•91 comments

The Game Genie Generation

https://tedium.co/2025/07/21/the-game-genie-generation/
120•coloneltcb•13h ago•51 comments

The surprising geography of American left-handedness (2015)

https://www.washingtonpost.com/news/wonk/wp/2015/09/22/the-surprising-geography-of-american-left-handedness/
32•roktonos•10h ago•18 comments

He Rewrote Everything in Rust – Then We Got Fired

https://medium.com/@ThreadSafeDiaries/he-rewrote-everything-in-rust-then-we-got-fired-293e3e16c2d3
3•wallflower•3d ago•3 comments

Tokyo's retro shotengai arcades are falling victim to gentrification

https://www.theguardian.com/world/2025/jul/18/cult-of-convenience-how-tokyos-retro-shotengai-arcades-are-falling-victim-to-gentrification
35•pseudolus•3d ago•11 comments

What will become of the CIA?

https://www.newyorker.com/magazine/2025/07/28/the-mission-the-cia-in-the-21st-century-tim-weiner-book-review
92•Michelangelo11•13h ago•148 comments

Scarcity, Inventory, and Inequity: A Deep Dive into Airline Fare Buckets

https://blog.getjetback.com/scarcity-inventory-and-inequity-a-deep-dive-into-airline-fare-buckets/
100•bdev12345•12h ago•37 comments

We have made the decision to not continue paying for BBB accreditation

https://mycherrytree.com/blogs/news/why-we-have-made-the-decision-to-not-continue-paying-for-accreditation-from-the-better-business-bureau-bbb
90•LorenDB•5h ago•46 comments

Workers at Snopes.com win voluntary recognition

https://newsguild.org/workers-at-snopes-com-win-voluntary-union-recognition/
96•giuliomagnifico•4h ago•4 comments

I know genomes. Don't delete your DNA

https://stevensalzberg.substack.com/p/i-know-genomes-dont-delete-your-dna
49•bookofjoe•12h ago•62 comments

Occasionally USPS sends me pictures of other people's mail

https://the418.substack.com/p/a-bug-in-the-mail
171•shayneo•16h ago•168 comments

I've launched 37 products in 5 years and not doing that again

https://www.indiehackers.com/post/ive-launched-37-products-in-5-years-and-not-doing-that-again-0b66e6e8b3
142•AlexandrBel•19h ago•129 comments

Show HN: Lotas – Cursor for RStudio

https://www.lotas.ai/
67•jorgeoguerra•13h ago•26 comments

AI comes up with bizarre physics experiments, but they work

https://www.quantamagazine.org/ai-comes-up-with-bizarre-physics-experiments-but-they-work-20250721/
155•pseudolus•6h ago

Comments

anonym00se1•5h ago
Feels like we're going to see a lot of headlines like this in the future.

"AI comes up with bizarre ___________________, but it works!"

ninetyninenine•5h ago
That’s how we become numb to the progress. Like think of this in the context of a decade ago. The news would’ve been amazing.

Imagine these headlines mutating slowly into "all software engineering performed by AI at a certain company" and we will just dismiss them as generic because being employed and programming with keyboards is old-fashioned. Give it twenty years and I bet this is the future.

hammyhavoc•4h ago
Twenty bucks says it isn't.
somenameforme•4h ago
A decade ago it wouldn't have been called AI, and it probably shouldn't be called AI today because it's absurdly misleading. It's a python program that "uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment".

Of course today call something "AI" and suddenly interest, and presumably grant opportunities, increase by a few orders of magnitude.

ninetyninenine•3h ago
Gradient descent is a learning algorithm. This is AI.
somenameforme•3h ago
Hahah, if you're going to go that route you may as well call all of math "AI", which is probably where we're headed anyhow! Gradient descent is used in training LLM systems, but it's no more "AI" itself than e.g. a quadratic regression is.
ordu•2h ago
Neural networks are the hype now, but that doesn't mean there was no AI before them. There was; it struggled to solve some problems, and for some of them it found solutions. Today people tend to reject everything that is not a neural net as not "AI": if it is not a neural net, then it is not AI, just general CS. However, AI research generated a ton of search algorithms, and while gradient descent (I think) was not invented as part of AI research, AI research adapted the idea to discrete spaces in multiple ways.

OTOH, AI is so much about search in multidimensional spaces that it would probably make sense to say gradient descent is an AI tool. Not because it is used to train neural networks, but because search in multidimensional spaces is the specialty of AI. People probably wouldn't agree, just as they don't agree that the Fundamental Theorem of Algebra is not really about algebra (and not fundamental, btw). But the disagreement is not about the deep meaning of the theorem or of gradient descent; it is about tradition and "we always did it this way".

omnicognate•2m ago
Gradient descent is used in machine learning, which is a field in AI, to train models (eg. Neural networks) on data. You get some data and use gradient descent to pick the parameters (eg. neural network weights) to minimise the error on that training data. You can then use your trained model by putting other data into it and getting its outputs.

The researchers in this article didn't do that. They used gradient descent to choose from a set of experiments. The choice of experiment was the end result and the direct output of the optimisation. Nothing was "learned" or "trained".

Gradient descent and other optimisation tools are used in machine learning, but long predate machine learning and are used in many other fields. Taking "AI" to include "anything that uses gradient descent" would just render an already heavily abused term almost entirely meaningless.
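To make the distinction concrete, here is a minimal sketch of gradient descent used directly as an optimizer, the way the comment above describes: the optimized parameter is itself the end result, and nothing is "trained" on data. The objective function and numbers are invented for illustration.

```python
# Gradient descent as a plain optimizer: no dataset, no model, no training.

def objective(x):
    # A hypothetical "experiment quality" score to minimize;
    # its minimum is at x = 3.0.
    return (x - 3.0) ** 2 + 1.0

def gradient(x):
    # Analytic derivative of the objective above.
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=200):
    # Repeatedly step against the gradient; the final x is the answer.
    x = x0
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

best = gradient_descent(x0=10.0)
# best converges toward 3.0, the minimizer of the objective
```

The contrast with machine learning is that here the output of the descent is the design itself, not the parameters of a model that will later be applied to new inputs.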

JimDabell•3h ago
That’s been called AI for about thirty years as far as I am aware. I’m pretty sure I first ran into it studying AI at uni in the 90s, reading Norvig’s Artificial Intelligence: A Modern Approach. This is just the AI Effect at work.

https://en.wikipedia.org/wiki/AI_effect

dns_snek•15m ago
You're taking intelligently designed specialized optimization algorithms like the one in this article and trying to use their credibility and success to further inflate the hype of general-purpose LLMs that had nothing to do with this discovery.
viraptor•4h ago
We've seen this for a while, just not as often: antennas, IC, FPGA design, small mechanical things, ...
sandspar•52m ago
"AI comes up with a bizarre short-form generative video genre that addicts user in seconds - but it works!" I'm guessing we're only a year or two away.
eleveriven•46m ago
Entering the "hold my beer" era of AI creativity
amelius•40m ago
... sometimes.
IAmGraydon•5h ago
This is the kind of thing I like to see AI being used for. That said, as is noted in the article, this has not yet led to new physics or any indication of new physics.
markasoftware•5h ago
not an LLM, in case you're wondering. From the PyTheus paper:

> Starting from a dense or fully connected graph, PyTheus uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment
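A loose toy sketch of the quoted description (not actual PyTheus code; the loss function, edge weights, and pruning threshold are all invented): continuous edge weights are tuned by gradient descent, then near-zero edges are dropped, shrinking the graph toward a minimal one.

```python
# Toy version of "gradient descent combined with topological optimization":
# optimize edge weights, then prune edges whose weights collapse to ~zero.

TARGET = [1.0, 0.0, 1.0, 0.0]  # hypothetical ideal weights: keep edges 0 and 2

def loss(weights):
    return sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def grad(weights):
    return [2.0 * (w - t) for w, t in zip(weights, TARGET)]

def optimize_and_prune(weights, lr=0.1, steps=500, threshold=0.05):
    for _ in range(steps):
        g = grad(weights)
        weights = [w - lr * gi for w, gi in zip(weights, g)]
    # "Topological optimization": discard edges whose weight fell
    # below the threshold, leaving a smaller graph.
    return [i for i, w in enumerate(weights) if abs(w) > threshold]

surviving_edges = optimize_and_prune([0.5, 0.5, 0.5, 0.5])
# surviving_edges -> [0, 2]: the minimal graph for this toy target
```

The real system evaluates weights against a quantum-optics target rather than a fixed vector, but the optimize-then-prune loop is the shape of the idea.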

aeternum•5h ago
More hype than substance unfortunately.

The AI rediscovered an interferometer technique the Russians found decades ago, optimized a graph in an unusual way, and came up with a formula to better fit a dark matter plot.

rlt•4h ago
The discovery itself doesn't seem like the interesting part. If the discovery wasn't in the training data, then it's a sign AI can produce novel scientific research / experiments.
hammyhavoc•4h ago
This is monkeys and typewriters.

It's like seeing things in clouds or tea leaves.

Supermancho•4h ago
If the "monkeys with typewriters" produce a Shakespeare sonnet faster than he is reincarnated, it's a useful resource.

At least, that's the thinking.

shermantanktop•3h ago
That’s the looooong game on both counts.
tux1968•3h ago
'tis the patient plot on either side, where time doth weave its cunning, deep and wide.
coliveira•2h ago
It is 100% impossible for AIs to create a Shakespeare sonnet. They can create a pastiche of a sonnet, which is completely different.
andsoitis•2h ago
It can’t be a Shakespeare sonnet if Shakespeare didn’t write it
coliveira•2h ago
AI companies stole massive amounts of information from every book they could get. Do you really believe there's any research they don't have input into their training sets?
wizzwizz4•2h ago
It's not that kind of AI. We know that these algorithms can produce novel solutions. See https://arxiv.org/abs/2312.04258, specifically "Urania".
irjustin•4h ago
Ehhhhh, I'll say it's substantive and not just pure hype.

Yes, the AI "resurfaced" the work, but it also incorporated the Russians' theory into the practical design. At least enough to say "hey, make sure you look at this" - meaning the system produced a workable something with an X% improvement, or some benefit notable enough that the researchers took it seriously and investigated. Obviously, that yielded an actual design with a 10-15% improvement and a "wish we had this earlier" statement.

No one was paying attention to the work before.

omnicognate•47m ago
AFAICT the "AI" didn't "pay attention to the work" either. They built a representation of a set of possible experiments, defined an objective function quantifying what they wanted to optimise and used gradient descent to find the best experiment according to that objective function.

If I've understood it right, calling this AI is a stretch and arguably even misleading. Gradient descent is the primary tool of machine learning, but this isn't really using it the way machine learning uses it. It's more just an application of gradient descent to an optimisation problem.

The article and headline make it sound like they asked an LLM to make an experiment and it used some obscure Russian technique to make a really cool one. That isn't true at all. The algorithm they used had no awareness of the Russian research, or of language, or experimental design. It wasn't "trained" in any sense. It was just a gradient descent program. It's the researchers that recognised the Russian technique when analyzing the experiment the optimiser chose.

viraptor•4h ago
This sounds similar to evolved antennas https://en.wikipedia.org/wiki/Evolved_antenna

There are a few things like that where we can throw AI at a problem and it generates something better, even if we don't know exactly why it's better yet.

JimDabell•4h ago
> Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”

This description reminds me of NASA’s evolved antennae from a couple of decades ago. It was created by genetic algorithms:

https://en.wikipedia.org/wiki/Evolved_antenna
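A minimal genetic-algorithm sketch in the spirit of the evolved antenna (the genome, fitness function, and all parameters here are invented for illustration): random variation plus selection drifts a population toward whatever scores well, with no regard for symmetry or human-looking structure.

```python
import random

random.seed(0)

TARGET = [0.1, 0.9, 0.4, 0.7, 0.2, 0.5]  # hypothetical "ideal" bend angles

def fitness(genome):
    # Higher is better; maximal (zero) when the genome matches TARGET.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3, scale=0.1):
    # Perturb a random subset of genes with small Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=30, generations=200):
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
# best scores well on the objective even though nothing in the process
# cares whether the result looks sensible to a human
```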

carabiner•2h ago
[mandatory GA antenna post requirement satisfied]
esperent•1h ago
That evolved antenna is a piece of wire with exactly 6 bends. It's extremely simple, the exact opposite of a hard to understand mess.
JimDabell•37m ago
This physics experiment:

> Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.

NASA describing their antenna:

> It has an unusual organic looking structure, one that expert antenna designers would not likely produce.

— https://ntrs.nasa.gov/citations/20060024675

The parallel seems obvious to me.

eleveriven•48m ago
That evolved antenna looked like something cobbled together by a drunk spider
ElFitz•6m ago
There was something similar about using evolutionary algorithms to produce the design for a mechanical piece used to link two cables or anchor a bridge’s cable, optimizing for weight and strength.

The design seemed alien and somewhat organic, but I can’t seem to find it now.

luketaylor•4h ago
Referring to this type of optimization program just as “AI” in an age where nearly everyone will misinterpret that to mean “transformer-based language model” seems really sloppy
andai•4h ago
That's how I feel about Web 3.0...
buu700•4h ago
Web 3(.0) always makes me think of the time around 14 years ago when Mark Zuckerberg publicly lightly roasted my roommate for asking for his predictions on Web 4.0 and 5.0.
tomrod•3h ago
I know, but can we blame the masses for misunderstanding AI when they are deliberately misinformed that transformers are the universe of AI? I think not!
zeofig•3h ago
Absolutely agree.
bee_rider•3h ago
How can one article be expected to fix the problem of people sloppily using “AI” when they mean LLM or something like that?
rachofsunshine•2h ago
I use "ML" when talking about more traditional/domain specific approaches, since for whatever reason LLMs haven't hijacked that term in the same way. Seems to work well enough to avoid ambiguity.

But I'm not paid by the click, so different incentives.

smj-edison•2h ago
Generative AI vs artificial neural network is my go-to (though ML is definitely shorter than ANN, lol).
IanCal•33m ago
Huge amounts of ML have nothing to do with ANNs, and transformers are ANNs.
smj-edison•24m ago
I stand corrected! What are your go-tos?
rachofsunshine•6m ago
Not the person you're replying to, but there are tons of models that aren't neural networks. Triplebyte used to use random forests [1] to make a decision to pass or fail a candidate given a set of interview scores. There are a bunch of others, though, like naive Bayes [2] or k-nearest-neighbors [3]. These approaches tend to need a lot less of a training set and a lot less compute than neural networks, at the cost of being substantially less complex in their reasoning (but you don't always need complexity).

[1] https://en.wikipedia.org/wiki/Random_forest

[2] https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Trainin...

[3] https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm

Nevermark•2h ago
I like that.

AI for attempts at general intelligence. (Not just LLMs, which already have a name … “LLM”.)

ML for any iterative inductive design of heuristical or approximate relationships, from data.

AI would fall under ML, as the most ambitious/general problems. And it is likely best treated as time-relative (i.e. a moving target, relative to the year), as the quality of general models continues to improve in breadth and depth.

Lionga•2h ago
Just do not use AI for anything except LLMs anymore. Same way that crypto scam has taken the word crypto.

crypto must now be named cryptography and AI must now be named ML to avoid giving the scammers and hypers good press.

ItsHarper•1h ago
Yep. I dislike it just as much as ceding crypto, but at the end of the day language changes, and clarity matters.

I think image and video generation that aren't based on LLMs can also use the term AI without causing confusion.

a_victorp•1h ago
Just don't use the term AI. It has no well defined meaning and is mostly intended as a marketing term
vanviegen•50m ago
So, "don't do marketing" is your advice?
dns_snek•33m ago
Correct, "an editorially independent online publication launched by the Simons Foundation in 2012 to enhance public understanding of science" shouldn't be doing marketing and contributing to the problem.
pharrington•2h ago
I'll bet that almost everyone who reads Quanta Magazine knows what they mean by AI.
fragmede•2h ago
Thinking "nearly everyone" has that precise definition of AI seems way more sloppy. Most people still haven't even heard of OpenAI and ChatGPT, but among those who have, they've probably heard stories about AI in science fiction. My definition of AI is any advanced computer processing, generative or otherwise, that's happened since we got enough computing power and RAM to do something about it, aka lately.
IanCal•31m ago
Then that definition is at odds with how the field has used it for many decades.

You can have your own definition of words but it makes it harder to communicate.

saithound•1h ago
Referring to this type of optimization as AI in the age where nearly everybody is looking to fund transformer-based language models and nobody is looking to fund this kind of optimization is just common sense though.
benterix•51m ago
You are both right. Because the term "AI" is so vague and can mean so many things, it will be used and abused in various ways.

For me, when someone says, "I'm working on AI", it's almost meaningless. What are you doing, actually?

benterix•47m ago
I think it's actually this repo:

https://github.com/artificial-scientist-lab/GWDetectorZoo/

Nothing remotely LLM-ish, but I'm glad they used the term AI here.

advael•29m ago
This exact kind of sloppy equivocation does seem to be one of the major PR strategies that tries to justify the massive investment in and sloppy rollout of transformer-based language models when large swaths of the public have turned against this (probably even more than is actually warranted)
qz_kb•3h ago
This is not "AI", it's non-linear optimization...
tomrod•3h ago
We all do math down here.
topspin•2h ago
"It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms."

Isn't that a delay line? The benefit being that when the undelayed and delayed signals are mixed, the phase shift you're looking for is amplified.
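A back-of-envelope sketch of the delay-line intuition above (all numbers invented): if the light makes N circulations through the added ring before exiting, a tiny per-pass phase shift accumulates N times, so the measured response grows roughly N-fold while staying in the small-angle regime.

```python
import math

def accumulated_signal(phase_per_pass, passes):
    # For small angles sin(N * phi) ~= N * phi, so the response grows
    # roughly linearly with the number of circulations N.
    return math.sin(passes * phase_per_pass)

phi = 1e-6                 # hypothetical single-pass phase shift (radians)
single = accumulated_signal(phi, 1)
circulated = accumulated_signal(phi, 100)
# circulated is ~100x single: the ring amplifies the phase signature
```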

heisenbit•51m ago
Sounds like ring lasers. Not really an unusual concept to increase sensitivity.
smj-edison•1h ago
Am I understanding the article correctly that they created a quantum playground, and then set their algorithm to work optimizing the design within the playground's constraints? That's pretty cool, especially for doing graph optimization. I'd be curious to know how compute-intensive it was.
eleveriven•53m ago
Feels like we're entering a new kind of scientific method. Not sure if that's thrilling or terrifying, but definitely fascinating
matt3210•37m ago
The "AI" here is not the same "AI" as Claude, Grok, or OpenAI. It's just an optimization algorithm that tries different things in parallel until it finds a better solution to inform the next round.
kristjank•32m ago
Impressive results. I remember reading about AI-generated microstrip RF filters not too long ago, and someone already mentioned evolved antenna systems. We are suffering from a severe case of calling gradient descent AI at the moment, but if it gets more money into actual research instead of LLM slop, I'm all for it.
IanCal•29m ago
> We are suffering from a severe case of calling gradient descent AI at the moment,

We’ve been doing that for decades, it’s just more recently that it’s come with so much more funding.

carabiner•6m ago
I still call computers "adding machines." Total fad devices.
Huxley1•30m ago
This AI-designed experiment is pretty cool. It seemed kind of weird at first, but since it actually works, it’s worth paying attention to. AI feels more like a powerful tool that helps us think outside the box and come up with fresh ideas. Is AI more of a helper or a creator when it comes to research?
theteapot•28m ago
AFAICT "The AI" (which is never actually described in the article) is a CSOP solver.