
Tiny electric motor outperforms record holder by 40%

https://supercarblondie.com/electric-motor-yasa-more-powerful-tesla-mercedes/
134•chris_overseas•2h ago•72 comments

KaTeX – The fastest math typesetting library for the web

https://katex.org/
58•suioir•4d ago•23 comments

Oxy is Cloudflare's Rust-based next generation proxy framework (2023)

https://blog.cloudflare.com/introducing-oxy/
115•Garbage•8h ago•45 comments

ECL Runs Maxima in a Browser

https://mailman3.common-lisp.net/hyperkitty/list/ecl-devel@common-lisp.net/thread/T64S5EMVV6WHDPK...
48•seansh•5h ago•6 comments

Paris had a moving sidewalk in 1900, and a Thomas Edison film captured it (2020)

https://www.openculture.com/2020/03/paris-had-a-moving-sidewalk-in-1900.html
313•rbanffy•14h ago•147 comments

The Arduino Uno Q is a weird hybrid SBC

https://www.jeffgeerling.com/blog/2025/arduino-uno-q-weird-hybrid-sbc
42•furkansahin•2d ago•17 comments

China intimidated UK university to ditch human rights research, documents show

https://www.bbc.com/news/articles/cq50j5vwny6o
130•giuliomagnifico•3h ago•58 comments

Using FreeBSD to make self-hosting fun again

https://jsteuernagel.de/posts/using-freebsd-to-make-self-hosting-fun-again/
332•todsacerdoti•1d ago•104 comments

Recantha's Tiny Toolkit

https://tinytoolk.it/toolkits/recantha-kit/
9•surprisetalk•3d ago•0 comments

When models manipulate manifolds: The geometry of a counting task

https://transformer-circuits.pub/2025/linebreaks/index.html
65•vinhnx•5d ago•7 comments

Alleged Jabber Zeus Coder 'MrICQ' in U.S. Custody

https://krebsonsecurity.com/2025/11/alleged-jabber-zeus-coder-mricq-in-u-s-custody/
144•todsacerdoti•15h ago•50 comments

Nvidia to invest up to $1B in AI startup Poolside

https://www.reuters.com/business/nvidia-invest-up-1-billion-ai-startup-poolside-bloomberg-news-re...
11•mgh2•56m ago•8 comments

Linux Tidbits and Collecting Pebbles

https://unixbhaskar.wordpress.com/2025/03/02/linux-tidbits-and-collecting-pebbles/
14•Bogdanp•5d ago•0 comments

Tongyi DeepResearch – open-source 30B MoE Model that rivals OpenAI DeepResearch

https://tongyi-agent.github.io/blog/introducing-tongyi-deep-research/
322•meander_water•1d ago•120 comments

Why don't you use dependent types?

https://lawrencecpaulson.github.io//2025/11/02/Why-not-dependent.html
233•baruchel•20h ago•91 comments

Syllabi – Open-source agentic AI with tools, RAG, and multi-channel deploy

https://www.syllabi-ai.com/
48•achushankar•9h ago•10 comments

URLs are state containers

https://alfy.blog/2025/10/31/your-url-is-your-state.html
432•thm•1d ago•185 comments

How the Mayans were able to accurately predict solar eclipses for centuries

https://phys.org/news/2025-10-mayans-accurately-solar-eclipses-centuries.html
85•pseudolus•1w ago•46 comments

New prompt injection papers: Agents rule of two and the attacker moves second

https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/
55•simonw•12h ago•20 comments

Underdetermined Weaving with Machines (2021) [video]

https://www.youtube.com/watch?v=on_sK8KoObo
36•akkartik•1w ago•8 comments

Lisp: Notes on its Past and Future (1980)

https://www-formal.stanford.edu/jmc/lisp20th/lisp20th.html
172•birdculture•16h ago•88 comments

X.org Security Advisory: multiple security issues X.Org X server and Xwayland

https://lists.x.org/archives/xorg-announce/2025-October/003635.html
177•birdculture•22h ago•146 comments

Facts about throwing good parties

https://www.atvbt.com/21-facts-about-throwing-good-parties/
724•cjbarber•13h ago•295 comments

Collatz-Weyl Generators: Pseudorandom Number Generators (2023)

https://arxiv.org/abs/2312.17043
35•danny00•4d ago•0 comments

Terahertz Tech Sets Stage for "Wireless Wired" Chips

https://spectrum.ieee.org/terahertz-chip-room-temperature
26•FromTheArchives•1w ago•4 comments

Reproducing the AWS Outage Race Condition with a Model Checker

https://wyounas.github.io/aws/concurrency/2025/10/30/reproducing-the-aws-outage-race-condition-wi...
123•simplegeek•17h ago•30 comments

Why does Swiss cheese have holes?

https://www.usdairy.com/news-articles/why-does-swiss-cheese-have-holes
79•QueensGambit•5d ago•191 comments

Is Your Bluetooth Chip Leaking Secrets via RF Signals?

https://www.semanticscholar.org/paper/Is-Your-Bluetooth-Chip-Leaking-Secrets-via-RF-Ji-Dubrova/c1...
120•transpute•17h ago•23 comments

Simple trick to increase coverage: Lying to users about signal strength

https://nickvsnetworking.com/simple-trick-to-increase-coverage-lying-to-users-about-signal-strength/
320•tsujamin•10h ago•125 comments

Backpropagation is a leaky abstraction (2016)

https://karpathy.medium.com/yes-you-should-understand-backprop-e2f06eab496b
332•swatson741•1d ago•132 comments

I hate science (2021)

https://buttondown.com/hillelwayne/archive/i-ing-hate-science/
39•todsacerdoti•11h ago

Comments

Jtsummers•11h ago
2021 with two past discussions:

https://news.ycombinator.com/item?id=27892615 - 168 comments

https://news.ycombinator.com/item?id=27891102 - 16 comments

ironmagma•9h ago
Science is amazing because it sucks and yet it's somehow still better than anything else we came up with for thousands of years.
pols45•8h ago
Science doesn't provide a Priest who will show up and sit with you at your time of grief or despair in handling the unpredictable. Priests in all religions are trained to occupy that space. And that is the prime reason Religions have survived for thousands of years long past the death of empires, kings and nations who all get tired or bored of showing up and occupying the unpredictability space.

A lot of that despair comes from how the architecture of the chimp brain handles unpredictability over different time horizons: what's the system going to do tomorrow / next month / next year / next decade? Confidence decreases, anxiety increases. If you want to break the architecture, keep feeding it the unpredictable.

So we get the Corporal Hudson in Aliens cycle: "I'm ready, man. Check it out. I am the ultimate badass. State of the badass art" > unpredictability > "What's happening, man? Now what are we supposed to do? That's it, man. Game over, man. Game over!"

Think about what science offers Corporal Hudson.

BLKNSLVR•6h ago
So, basically:

Where science can't make you better, non-science can make you feel better.

Where truth is painful, untruth can attempt to provide comfort.

(not sure how any of this relates to the comment or the article, maybe I should have just ignored this)

eviks•8h ago
We can't come up with anything better because we're using a term that would include anything better we come up with. There are religious studies in science! And if you suddenly had most discoveries come from revelation, that'd still be part of some old or new scientific discipline. So you're mostly amazed by your own vocabulary papering over all the nonsense it includes.
ironmagma•7h ago
I should have specified for pedants that I meant the system of peer review and scientific inquiry.
eviks•7h ago
You're still too vague. Do you mean the <100-year-old peer review system? And that it's better than all the scientific discoveries of the past thousands of years?
kragen•6h ago
You think that's better than the work that preceded peer review, by people like Einstein, Bunsen, Kelvin, Planck, Darwin, Maxwell, Mendeleev, Michelson, Steinmetz, Faraday, Davy, Haber, Tesla, etc.? Because I have to say I find the pre-peer-review papers to generally be of much higher quality.
moring•5h ago
How did you come to the conclusion that those have not been peer-reviewed? Every uni course that presents the work of these people implicitly reviews it for consistency, and the advanced practices courses repeat their experiments.

Also, survivorship bias.

whatshisface•8h ago
As far as I can tell, the biggest difference between areas where the literature is a nightmare and areas where it's like math is the degree to which things self-correct due to the interlocking of disparate parts. If you publish a bad end-result study, say you're measuring the effect of an environmental toxin on human cognitive decline, that's it. If it's right it's right, if it's wrong it's wrong. In contrast, if you discover a fundamental pathway in secondary metabolite biosynthesis, nobody else's research will make sense unless you get it right.
jltsiren•7h ago
Some European languages have a word for "science". Some have a word for "Wissenschaft". I'm not aware of any language that has separate words for both concepts. Confusion ensues when "science" inevitably gets translated to "Wissenschaft", or the other way around.

Science is centered around the scientific method. A naive understanding of it can lead to an excessive focus on producing disconnected factoids called results. Wissenschaft has different failure modes, but because you are supposed to study your chosen topic systematically by any means necessary, you have to think more about what you are trying to achieve and how. For example, whether you want to produce results, explanations, case studies, models, predictions, or systems.

The literature tends to be better when people see results as intermediate steps they can build on, rather than as primary goals.

Insanity•6h ago
Huh, I never really thought deeply about this. My mother tongue is Dutch which has the word “Wetenschap” which maps directly to Wissenschaft.

But I don’t consciously distinguish that from the English “science”. Although obviously the connotation of science leans on the scientific method whilst “Wetenschap” is more on the “gaining of knowledge”.

While there is no single English-word translation I can think of, I guess “knowledge building” or “the effort to expand knowledge” might be good approximations.

Interesting, never thought about this distinction too much.

analog31•6h ago
The word "science" predates modern natural science, so I'm not sure these are really different words.

Thomas Aquinas asks if theology is a science. Spoiler alert: The answer is Yes.

CamperBob2•5h ago
Aquinas predated Popper, whose definition is more influential today. Nothing about theology is falsifiable, so no, the answer is "No."
emil-lp•3h ago
Theology is a "science" in the same way as social science is a science. They don't use the scientific approach as defined by Popper, but they still try to find out stuff in the best possible way.
throwaway290•5h ago
> Some European languages have a word for "science". Some have a word for "Wissenschaft". I'm not aware of any language that has separate words for both concepts

Not really European, but in Russian it's neither: the word for science, "наука", is closest to "teaching" or "education" (edit: and historically "punishment").

There is no stem for knowledge ("знать") or for science (a dedicated word doesn't even exist in Russian) in that word :)

NoMoreNicksLeft•5h ago
>the word for science "наука"

It's literally "na-oo-ka"? What in the hell is the etymology of that?

throwaway290•5h ago
The stem "ук/уч" is in "учить" (to teach, or in old times to punish) and other teaching-related words; idk the etymology.
artyom•8h ago
Science is very good.

Pseudoscience like measuring the cost to fix a bug in a classroom setting is bad. Especially if it literally puts "cost" and "classroom" together. That's just a sad way to grab some more research funding to keep the machine going.

The "garbage pile" of papers is not a new problem. It's been plaguing the science world for quite a long time. And it's only going to get worse because the metric and the target are the same (Goodhart's Law).

From the article itself, each mentioned paper screams "the author never had to write actual functional code for a living" to me.

evolighting•5h ago
The "garbage pile" of papers is not a problem: they are deliverables, representing completed work for which people have been paid.

If we stop paying for that "garbage", the problem might disappear. But what about the researchers and scientists who depend on that funding to live?

What needs to change is the very way academic work is organized, but nothing comes for free.

morshu9001•5h ago
Creating metrics for code quality is about as reliable as doing it for essay quality
locknitpicker•5h ago
> The "garbage pile" of papers is not a new problem. It's been plaguing the science world for quite a long time. And it's only going to get worse because the metric and the target are the same (Goodhart's Law).

I don't think this observation is valid. Papers are not expected to be infallible truth-makers. They are literally a somewhat standardized way for anyone to address a community of their peers and say "hey guys, check out this neat thing I noticed".

Papers are subject to review because each publication has an editorial bar (that is, you can't just write a few sentences with a crayon and expect it to be published), and a paper should not just clone whatever people already wrote before. Other than that, unless you are committing academic fraud, you can still post a follow-up saying you found your conclusions were off or you missed something.

artyom•40m ago
Not a bad argument in an ideal world.

On that trajectory, science is bound to cross the information-overload / false-equivalence threshold, where the "hey, check this out" scenario won't scale and the cost of validating all the other papers outweighs the (theoretical) gains.

Not sure if you think that threshold has been crossed already or not.

rramadass•7h ago
The article is somewhat confused. The Scientific Method (https://en.wikipedia.org/wiki/Scientific_method) itself is an empirical process with caveats. We need to use our own intelligence in judging and deciding what and how to interpret some data/hypothesis. It is "trial and error", but with established principles/laws/heuristics added in to guide our "trials".

Thus, for example, the answer to the question "Are late-stage bugs more expensive?" is "generally yes": at a later stage in development more of the design/implementation is done, so we have a larger number of interacting components and thus increased complexity. The probability that the bug lies in the interaction/intersection of various components is therefore higher, which may require us to rework (both design and implementation) a large part of the system. This is the reason we accept separation of concerns, modularization, and frequent feedback loops between design/implementation/testing as "standard" Software Engineering practices.
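The combinatorial point here can be made concrete: with n components, the number of pairwise interaction points, any of which a late-stage bug could hide in, grows quadratically. A tiny illustrative Python sketch (the component counts are made-up examples):

```python
from math import comb

def interaction_points(n_components: int) -> int:
    """Number of pairwise interaction points between n components.

    A late-stage bug can hide in any of them, so the search space
    (and the potential rework) grows roughly quadratically with size.
    """
    return comb(n_components, 2)

# Early prototype vs. late-stage system (hypothetical sizes):
for n in (3, 10, 30):
    print(f"{n:>2} components -> {interaction_points(n):>3} pairwise interactions")
```

This is only the pairwise case; interactions among three or more components make the late-stage picture even worse.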

Lee Smolin in his essay There is No Scientific Method (https://bigthink.com/articles/there-is-no-scientific-method/) states the following which i think is very applicable to "Software Engineering" since it is more of a Human Process than Hard Science;

Science works because Scientists form communities and traditions based not on a common set of methods, but a common set of ethical principles. And there are two ethical principles that I think underlie the success of science...

The first one is that we agree to tell the truth and we agree to be governed by rational argument from public evidence. So when there is a disagreement it can be resolved by referring to a rational deduction from public evidence. We agree to be so swayed.

Whether we originally came to that point of view or not to that point of view, whether that was our idea or somebody else’s idea, whether it’s our research program or a rival research program, we agree to let evidence decide. Now one sees this happening all the time in science. This is the strength of science.

The second principle is that when the evidence does not decide, when the evidence is not sufficient to decide from rational argument, whether one point of view is right or another point of view is right, we agree to encourage competition and diversification amongst the professionals in the community.

Here I have to emphasize I’m not saying that anything goes. I’m not saying that any quack, anybody without an education is equal in interest or is equal in importance to somebody with his Ph.D. and his scientific training at a university...

I’m talking about the ethics within a community of people who have accreditation and are working within the community. Within the community it’s necessary for science to progress as fast as possible, not to prematurely form paradigms, not to prematurely make up our mind that one research program is right to the exclusion of others. It’s important to encourage competition, to encourage diversification, to encourage disagreement in the effort to get us to that consensus which is governed by the first principle.

Kamq•5h ago
Oh come on. They're obviously using the word "science" in this context as a shorthand for the institutions and processes we've set up to do research. Mostly because that's too many words for a title and nobody has come up with a catchy name that's not politically coded. It's also pretty normal usage of the word out in the wild.
rramadass•2h ago
What Smolin is trying to point out are the meta-principles which underlie the feedback loop of the Scientific Method itself. Once the principles are adhered to, the loop becomes common sense. This is because all of "doing science" is human activity, where we discover knowledge through three means, viz. 1) authority (textual/oral), 2) reasoning, 3) experience. All three have to be considered to come to a definite conclusion. The submitted article ignores this trifecta and seems to conflate "empiricism" solely with external validation.
huijzer•6h ago
I did a PhD and keep saying this: the system is completely broken because the incentives are completely broken. Researchers have many things to keep in mind for their career, but truth isn’t really one of them. Citations are more important for career than truth and even false papers can get many citations.
tombert•5h ago
I didn't finish my PhD, but when I was doing it, it was upsetting how easy it was to come up with a conclusion first, and then find a paper to support it.

It was like finding out Santa Claus wasn't real to me.

112233•5h ago
This is sad for people who are not connected to science. Since platonic ideal science is axiomatically infallible, by agreed definition (as are the platonic free market and other similar models), it is assumed that anyone from outside academia who questions currently published results is a quack, a luddite, and psychotic. It does not matter how sus the process leading to those results is; only scientists are allowed to call out scientists.
whatever1•5h ago
All of these Agile CI/CD guys should have the stats. We know exactly how many lines of code and labor hours it takes to solve a bug that was not in the test suite.
contrarian1234•5h ago
Maybe some readers will come across this and, instead of foaming at the mouth, want solutions.

> Over time you slowly build out a list of good "node papers" (mostly literature reviews) and useful terms to speed up this process, but it's always gonna be super time consuming.

This maybe didn't exist at the time this blog post was written, but it is not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com, which will build out graphs of references and tell you at a glance which papers are cited more. You can find the more reliable stuff, i.e. the "node papers".
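The "node paper" idea can be sketched without any external service: collect citation edges, count how often each paper is cited, and surface the most-cited ones. A toy Python example; the paper IDs and edges below are invented for illustration:

```python
from collections import Counter

# Hypothetical citation edges: (citing paper, cited paper).
citations = [
    ("A", "Review1"), ("B", "Review1"), ("C", "Review1"),
    ("A", "Review2"), ("C", "Review2"),
    ("B", "D"),
]

# In-degree: how many times each paper is cited by the others.
cited_counts = Counter(cited for _, cited in citations)

# Frequently cited papers are candidate "node papers" to read first.
node_papers = [paper for paper, n in cited_counts.most_common() if n >= 2]
print(node_papers)  # ['Review1', 'Review2']
```

Services like connectedpapers.com do something far richer (similarity graphs, not just raw citation counts), but the in-degree heuristic is the core of why literature reviews surface as hubs.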

The review paper is the traditional way. It's usually okay, but very biased towards the author's background.

If it's very "fresh off the press" stuff, then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...

spookie•3h ago
> This maybe didn't exist at the time this blog post was written, but it is not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com, which will build out graphs of references and tell you at a glance which papers are cited more. You can find the more reliable stuff, i.e. the "node papers".

True, but the aim isn't really finding which ones are cited the most, although that does help you in the ordeal. In a sense those tools give you a macro understanding, but they are very prone to initial seed bias: it is difficult to get out of a closed sub-section of the field. This is especially the case with technical papers, which often fail to address surrounding issues. Those issues might still be technical, just not within the grasp of that particular sub-group of authors.

In the end, like you said, it is very time consuming. You do need to go through each one individually and build an understanding and intuition for what to look for, and how to get out of those "cycles" for a deeper understanding. And you really are better off reading them yourself.

> The review paper is the traditional way. It's usually okay, but very biased towards the author's background.

> If it's very "fresh off the press" stuff, then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...

Some guidelines like PRISMA, or the various assessments of self-bias, are generally good indicators that the author cared. Having sections like these will help you get the aforementioned intuition for what else to look through, since you have an acknowledgment from the source itself of its bias (your own assessment may be biased, so some ground truth is good). Plus a really thorough description of their methods for gathering the information (databases, queries, and themes they spent time on).

Agreed, recent is generally bad; you need to allow some time for things to have a chance to get looked at.