
Ask HN: What is your most disturbing moment with generative AI?

9•gardnr•6mo ago

Comments

bearjaws•6mo ago
Doing a project to migrate from one LMS to another, I put ChatGPT in the middle to fix various mistakes in the content, add alt text for images, transcribe audio, etc.

When importing the content back into Moodle, I came to find that one of the transcripts was 30k+ characters and errored out on import.

For whatever reason, it got stuck in a loop that started like this:

"And since the dawn of time, wow time, its so important, time is so important. What is time, time is so important, theres not enough time, time is so important time"... repeat "time is so important" until token limit.

This really gave me a bit of existential dread.

lynx97•6mo ago
Try reducing the temperature. The default of 1.0 is sometimes too "creative". Setting it to 0.5 or somesuch should reduce events like the one you described.
bearjaws•6mo ago
I was already running 0.1 or 0.2 because I didn't want it to deviate far from the source content.
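
For context on what the temperature knob in this exchange actually does: it rescales the model's next-token scores before sampling, so lower values concentrate probability on the likeliest token. A minimal sketch in plain Python (a toy softmax, not any particular vendor's API):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then softmax.
    Lower temperature -> peakier distribution -> less 'creative' sampling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy next-token scores
hot = softmax_with_temperature(logits, 1.0)   # default-ish sampling
cold = softmax_with_temperature(logits, 0.2)  # what bearjaws was running

# At T=0.2 nearly all the mass sits on the top token, so the model
# hugs the most likely continuation instead of wandering.
print(max(hot), max(cold))
```

Note that very low temperature makes output more deterministic, not necessarily loop-free; degenerate repetition like the one quoted above can still occur when the likeliest continuation is itself repetitive.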
alganet•6mo ago
Nothing is disturbing.
theothertimcook•6mo ago
How much I've come to trust the answers, responses, and information it feeds me for my increasingly frequent queries.
rotexo•6mo ago
I find myself occasionally wondering if 8.11 is in fact greater than 8.9
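
The 8.11-vs-8.9 confusion has a mechanical explanation: the same strings order differently as decimal numbers and as dotted version components, and models trained on both conventions sometimes apply the wrong one. A small illustration (plain Python, nothing model-specific):

```python
# As decimals, 8.11 < 8.90; as version numbers, 8.11 > 8.9 (minor 11 > 9).

def greater_as_floats(a: str, b: str) -> bool:
    """Compare as decimal numbers."""
    return float(a) > float(b)

def greater_as_versions(a: str, b: str) -> bool:
    """Compare component-wise, like dotted version strings."""
    return tuple(int(x) for x in a.split(".")) > tuple(int(x) for x in b.split("."))

print(greater_as_floats("8.11", "8.9"))    # False: 8.11 < 8.90
print(greater_as_versions("8.11", "8.9"))  # True: component 11 > 9
```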
diatone•6mo ago
Deep fakes have always been horrible. The idea that someone - anyone - can take your image and represent you in ways that can ruin your reputation, is appalling. For example, revenge porn.

Having your likeness used to express an opinion that is the opposite of your own is nasty too. You can produce the kind of thing that has no courtesy, no grace, no kindness or care for the people around you.

The mass extraction and substitution of art has also caused a lot of unnecessary grief. Instead of AI enabling us to pursue creative work… it’s producing slop and making it harder for newbies to develop their craft. And making a lot of people anxious, fearful, and angry.

And finally of course astroturfing, phishing, that kind of thing has in principle become a lot more sophisticated.

It unnerves me that people can pull this capital lever against each other in ways that don’t obviously advance the common good.

dgunay•6mo ago
I saw an AI generated video the other day of security camera footage of a group of people attempting to rob a store, then running away after the owner shoots at them with a gun. The graininess and low framerate of the video made it a lot harder to tell that it was AI generated than the usual shiny, high res, oddly smooth AI look. There were only very subtle tells - non-reaction of bystanders in the background, and a physics mistake that was easy to miss in the commotion.

We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation, because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.

atleastoptimal•6mo ago
I saw that, it wasn't AI generated. There were red herrings in the compression artifacts. The real store owner spoke about the experience:

https://x.com/Rimmy_Downunder/status/1947156872198595058

(sorry about the x link couldn't find anything else)

The problem of real footage being discredited as AI is as big as the problem of AI footage being passed off as real. But both are subsets of a larger problem: AI can simulate all the costly signals of value very cheaply, so everything that depended on the costliness of those channels starts to break down. This is true for epistemics, but also for social bonds (chatbots), credentials, experience and education (AI performing better on many knowledge tasks than experienced humans), and more.

ginayuksel•6mo ago
I once tried prompting an LLM to summarize a blog post I had written myself. Not only did it fail to recognize the main argument, it confidently hallucinated a completely unrelated conclusion. It was disturbing not because it was wrong, but because it sounded so right.

That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.

orangepush•6mo ago
I asked an AI to help me draft an onboarding email for a new feature. It wrote something so human-like, so emotionally aware, that I felt oddly… replaced.

It wasn’t just about the writing; it felt like it understood the intention behind the message better than I did. That was the first time I questioned where we’re headed.

TXTOS•6mo ago
Honestly, the most disturbing moment for me wasn’t an answer gone wrong — it was realizing why it went wrong.

Most generative AI hallucinations aren’t just data errors. They happen because the language model hits a semantic dead-end — a kind of “collapse” where it can't reconcile competing meanings and defaults to whatever sounds fluent.

We’re building WFGY, a reasoning system that catches these failure points before they explode. It tracks meaning across documents and across time, even when formatting, structure, or logic goes off the rails.

The scariest part? Language never promised to stay consistent. Most models assume it does. We don’t.

Backed by the creator of tesseract.js (36k). More info: https://github.com/onestardao/WFGY