
Ask HN: What is your most disturbing moment with generative AI?

9•gardnr•6mo ago

Comments

bearjaws•6mo ago
Doing a project to migrate from one LMS to another, I put ChatGPT in the middle to fix various mistakes in the content, add alt text for images, transcribe audio, etc.

When importing the content back into Moodle, I came to find that one of the transcripts was 30k+ characters and errored out on import.

For whatever reason, it got stuck in a loop that started like this:

"And since the dawn of time, wow time, its so important, time is so important. What is time, time is so important, theres not enough time, time is so important time"... repeat "time is so important" until token limit.

This really gave me a bit of existential dread.
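A pre-import guard along these lines would have caught that transcript before Moodle choked on it. This is only a minimal sketch: the limits are hypothetical, and it is not tied to the actual Moodle importer or the pipeline described above.

```python
def looks_degenerate(text, max_chars=30000, phrase_len=4, max_repeats=10):
    """Flag transcripts that are too long or stuck in a repetition loop."""
    # Reject anything past the (assumed) importer length limit.
    if len(text) > max_chars:
        return True
    # Count short word n-grams; an LLM stuck in a loop repeats one endlessly.
    words = text.lower().split()
    counts = {}
    for i in range(len(words) - phrase_len + 1):
        gram = " ".join(words[i:i + phrase_len])
        counts[gram] = counts.get(gram, 0) + 1
        if counts[gram] > max_repeats:
            return True
    return False
```

Running each generated transcript through a check like this, and retrying or falling back to the source content on failure, turns a silent loop into a visible error.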

lynx97•6mo ago
Try reducing temperature. The default of 1.0 is sometimes too "creative". Setting it to 0.5 or somesuch should reduce events like the one you described.
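For intuition: temperature just divides the logits before the softmax, so a lower value sharpens the distribution toward the most likely tokens. A minimal sketch of that effect (toy logits, not any particular model's API):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax: T < 1 sharpens
    # the distribution, T > 1 flattens it toward uniform.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
p_default = softmax_with_temperature(logits, 1.0)
p_low = softmax_with_temperature(logits, 0.5)
# At T=0.5 the top token takes a larger share of the probability mass
# than at T=1.0, so sampling deviates less from the likeliest path.
```

Note this cuts both ways: very low temperatures also make the model likelier to keep picking the same high-probability continuation, which is one way repetition loops happen.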
bearjaws•6mo ago
Was already running 0.1 or 0.2 because I didn't want it to deviate far from the source content.
alganet•6mo ago
Nothing is disturbing.
theothertimcook•6mo ago
How much I've come to trust the answers, responses, and information it feeds me for my increasingly frequent queries.
rotexo•6mo ago
I find myself occasionally wondering if 8.11 is in fact greater than 8.9
diatone•6mo ago
Deep fakes have always been horrible. The idea that someone - anyone - can take your image and represent you in ways that can ruin your reputation, is appalling. For example, revenge porn.

Having your likeness used to express an opinion that is the opposite of your own is nasty too. You can produce the kind of thing that has no courtesy, no grace, no kindness or care for the people around you.

The mass extraction and substitution of art has also caused a lot of unnecessary grief. Instead of AI enabling us to pursue creative work… it’s producing slop and making it harder for newbies to develop their craft. And making a lot of people anxious, fearful, and angry.

And finally of course astroturfing, phishing, that kind of thing has in principle become a lot more sophisticated.

It unnerves me that people can pull this capital lever against each other in ways that don’t obviously advance the common good.

dgunay•6mo ago
I saw an AI-generated video the other day of security camera footage of a group of people attempting to rob a store, then running away after the owner shoots at them with a gun. The graininess and low framerate of the video made it a lot harder to tell that it was AI-generated than the usual shiny, high-res, oddly smooth AI look. There were only very subtle tells - non-reaction of bystanders in the background, and a physics mistake that was easy to miss in the commotion.

We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.

atleastoptimal•6mo ago
I saw that, it wasn't AI generated. There were red herrings in the compression artifacts. The real store owner spoke about the experience:

https://x.com/Rimmy_Downunder/status/1947156872198595058

(sorry about the x link couldn't find anything else)

The problem of real footage being discredited as AI is as big as the problem of AI footage being passed as real. But they're subsets of the larger problem: AI can simulate all costly signals of value very cheaply, leading to all the inertia dependent on the costliness of those channels breaking down. This is true for epistemics, but also social bonds (chatbots), credentials, experience and education (AI performing better on many knowledge tasks than experienced humans), and others.

ginayuksel•6mo ago
I once tried prompting an LLM to summarize a blog post I had written myself. Not only did it fail to recognize the main argument, it confidently hallucinated a completely unrelated conclusion. It was disturbing not because it was wrong, but because it sounded so right.

That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.

orangepush•6mo ago
I asked an AI to help me draft an onboarding email for a new feature. It wrote something so human-like, so emotionally aware, that I felt oddly… replaced.

It wasn’t just about the writing; it felt like it understood the intention behind the message better than I did. That was the first time I questioned where we’re headed.

TXTOS•6mo ago
Honestly, the most disturbing moment for me wasn’t an answer gone wrong — it was realizing why it went wrong.

Most generative AI hallucinations aren’t just data errors. They happen because the language model hits a semantic dead-end — a kind of “collapse” where it can't reconcile competing meanings and defaults to whatever sounds fluent.

We’re building WFGY, a reasoning system that catches these failure points before they explode. It tracks meaning across documents and across time, even when formatting, structure, or logic goes off the rails.

The scariest part? Language never promised to stay consistent. Most models assume it does. We don’t.

Backed by the creator of tesseract.js (36k). More info: https://github.com/onestardao/WFGY