
Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•5m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•7m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•8m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•21m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•24m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•26m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•34m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•36m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•37m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•38m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•40m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•41m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•45m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•47m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•47m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•48m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•50m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•53m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•56m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

AI is hallucinating its way into research

https://thelibre.news/why-is-science-full-of-ai/
2•speckx•2mo ago

Comments

ChrisArchitect•2mo ago
Related:

Over fifty new hallucinations in ICLR 2026 submissions

https://news.ycombinator.com/item?id=46181466

yet-another-guy•2mo ago
Long ago I was active in experimental software engineering research. Publish or perish was brilliantly solved by the most successful researchers in the field, who walked around with stellar bibliometrics, published 50+ papers a year, and secured endless rounds of funding on the back of that status. The trick was simple: batteries of cheap students, postdocs, and junior researchers/engineers, operating at varying degrees of independence, who coded and ran the experiments/simulations. The students/postdocs got their papers, and the scientists could salami-slice the hell out of their "research", each paper in a topic citing all of their previous papers, because of course your previous work is related work for your current work (and nobody filters out self-citations anyway). The quicker you could go through this loop of idea -> experimental validation -> results -> next idea, the higher the publication throughput. The slowest link in the chain was, of course, transforming an idea into experimental results, hence the hierarchical structure of cheap workers in the research group.

With AI, dishing out massive amounts of research in these simulation-heavy fields is trivial, and it doesn't even require empire building anymore, where you have to work your way toward funding for your personal army. Just give an LLM the right context and examples, and you can prompt your way through a complete article, experimental validation included. That's the real skill/brilliance now. If you have the decency to read and refine the final outcome, at least you can claim you retained some ethical standard. Or maybe you have an AI review it (spoiler alert: program committees do that already), so that it comes up with ideas, feedback, and suggestions for improvements. And then you implement those. Or, actually, you have the AI implement those. And then you review it again. Or the AI does. Maybe you put that in an adversarial for-loop and collect your paper just in time to submit for the deadline -- if you don't already have an agent setup doing that for you.
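(For concreteness, the "adversarial for-loop" above might look like the minimal sketch below. The draft() and critique() helpers are hypothetical stand-ins for whatever LLM calls such a setup would use; they are not a real API and nothing here comes from the article itself.)

```python
# Minimal sketch of the adversarial generate -> review -> revise loop described above.
# draft() and critique() are hypothetical placeholders for LLM calls, not a real API.

from datetime import datetime, timedelta


def draft(idea: str, feedback: list[str]) -> str:
    """Hypothetical LLM call that writes a full paper from an idea plus prior feedback."""
    return f"Paper on {idea!r}, revised to address {len(feedback)} review comments."


def critique(paper: str) -> list[str]:
    """Hypothetical LLM call acting as the adversarial reviewer."""
    return []  # an empty list means the 'reviewer' has no complaints left


def paper_mill(idea: str, deadline: datetime) -> str:
    """Iterate draft -> critique -> redraft until the reviewer is satisfied or the deadline hits."""
    feedback: list[str] = []
    paper = draft(idea, feedback)
    while datetime.now() < deadline:
        feedback = critique(paper)
        if not feedback:  # no complaints: submit
            break
        paper = draft(idea, feedback)
    return paper


if __name__ == "__main__":
    print(paper_mill("salami-sliced simulation study", datetime.now() + timedelta(hours=1)))
```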

Measuring the actual impact of research outside of bibliometrics has always been next to impossible, especially in high-velocity domains like CS. We're at an age where, barring ethical standards, the only deterrent preventing a researcher from using an army of LLMs to publish in their name is the fear of getting completely busted by the community. The only currency here is your face and your credibility. Five years ago you still had to come up with an idea and implement/test it; then it just didn't work and kept not working despite endless re-designs, so eventually you cooked the numbers so you could submit a paper with a non-zero chance of getting published (and accumulate a non-zero chance of not perishing). Now you don't even need to cook the numbers, because the opportunity cost of producing a paper with an LLM is so low that you can effortlessly iterate and expand. Negative results? Weak storyline? Uninteresting problem? By sheer chance, some of your AI-generated stuff will get through. You're even in for the best paper award if the actual reviewers use the same LLM you used in your adversarial review loop!