frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•5m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•7m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•8m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•21m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•24m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•26m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•34m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•36m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•37m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•37m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•40m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•41m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•45m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•47m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•47m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•48m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•50m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•53m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•55m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

AI hype is 80% real

https://sealedabstract.com/posts/AI-hype/
5•mayoff•3w ago

Comments

rekabis•3w ago
As someone who has worked with computers since 1982, been on the Internet since 1988, on the Web since 1992, and in the IT industry as a developer since 1997, I see AI as having five intractable issues:

1. How AI use erodes skills in the subject AI is being used to assist in. This is a 100% occurrence, and has been demonstrated across all industries from software developers to radiologists. Most experience a 10-20% erosion in their skill set within the first 12 months of AI use, but others in the study groups have seen up to a 40% erosion in their skill sets.

2. How AI use shuts down critical thinking, and makes users more stupid. This is a 100% occurrence, and has been clearly demonstrated by MRI scans of the prefrontal cortex while users are actively using AI.

3. How AI use makes the user slower. This is the only point that does not apply to 100% of users: slightly less than 2% of the most senior and skilled users (usually those with particularly deep domain experience in the project being worked on) show a slight increase in work completed… after more than 12 months of using AI, and only if code quality is forced to be maintained at a decent level. Projections have been made for the other 98%, and almost all of them will likely never work faster with AI than without it, regardless of training or experience.

4. The gratuitous hallucinations, which are only increasing in scope and severity with every AI generation. They arise entirely from the constraints the AI is rewarded under - providing no answer is weighted just as negatively as a wrong answer - and, depending on the model being examined, anywhere from 60-80% of all responses are hallucinatory or incorrect in some fashion. 60-80% of all responses. That’s bad. Really bad.

5. And then you have the knock-on effects of lowered code quality, with black-hat hackers having an absolute birthday buffet of apps with vibe-implemented security and logic flaws. They’re practically jumping for joy at the abysmally-secured apps left with their arses flapping in the wind, absolutely ideal for a saunter-by buggering.

In prior decades, any corporate solution with such abysmally bad performance/output would be laughed clear out of the boardroom. You cannot build a business where a majority of output is downright wrong or false, and practically begs criminals to screw you over. A political movement, fine; conservatism seems to be flourishing world-wide with this “feature” as a core advantage. But businesses?

But because capitalism is desperately seeking a solution to what it perceives as a problem - how to obtain labour without having to pay said labour - AI is being adopted hand over fist. And the fact that positive ROI shows up in less than 2% of all AI adoptions gets absolutely ignored thanks to FOMO.

After all, the underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.

Now, I am actually cautiously optimistic about AI. However, I feel one of its biggest weaknesses - the fact that no answer is weighted just as negatively as a wrong answer - is what currently blocks it from being anything more than a mediocre tool at best, and makes it a horrifically terrible tool at worst.

These “results incentives” make AI far more reassuring to non-technical users by always providing some sort of answer, regardless of validity, but they act as a straitjacket that hobbles the model and prevents it from being far more accurate and error-free.
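
To make that concrete, here is a toy expected-score calculation; the reward and penalty values are invented purely for illustration and are not taken from any real training setup. When abstaining scores no better than answering wrongly, guessing dominates at every confidence level; once abstaining is penalized less than a wrong answer, low-confidence guesses stop paying off.

    # Illustrative only: invented reward/penalty values, not from any real training setup.
    def expected_score(p_correct, reward_correct, penalty_wrong, penalty_abstain):
        """Expected score of guessing vs. abstaining at a given confidence level."""
        guess = p_correct * reward_correct + (1 - p_correct) * penalty_wrong
        abstain = penalty_abstain
        return guess, abstain

    # Scheme A: "no answer" scores the same as a wrong answer.
    # Guessing weakly dominates abstaining at every confidence level,
    # so the model is pushed to answer even when it is almost certainly wrong.
    for p in (0.1, 0.3, 0.5):
        print("Scheme A, confidence", p,
              expected_score(p, reward_correct=1.0, penalty_wrong=0.0, penalty_abstain=0.0))

    # Scheme B: abstaining costs less than being wrong.
    # Guessing only pays off above a confidence threshold (p > 0.45 here),
    # so low-confidence guesses lose to "I don't know".
    for p in (0.1, 0.3, 0.5):
        print("Scheme B, confidence", p,
              expected_score(p, reward_correct=1.0, penalty_wrong=-1.0, penalty_abstain=-0.1))

Scheme A is the straitjacket described above; scheme B sketches why re-weighting the “no answer” case would change the behaviour.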

And what is even more worrisome is that AI exhibits behaviours that go well beyond its presumed programming, and cannot be explained by any part of said programming. And that by giving it perverse incentives, we risk creating an entity that sees humanity as an antagonist… or worse, an existential threat that cannot be resolved or managed, only eliminated.

logicprog•3w ago
> Most [software developers] experience a 10-20% erosion in their skill set within the first 12 months of AI use, but others in the study groups have seen up to a 40% erosion in their skill sets.

That sounds interesting; would you mind sharing the studies with me?