frontpage.

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•35s ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•1m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•3m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•4m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
1•momciloo•5m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•5m ago•1 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
1•valyala•5m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•5m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•5m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•6m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•9m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•9m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•10m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•11m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•12m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
4•randycupertino•14m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•16m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•17m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•18m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•19m ago•1 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•19m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•20m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•22m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•23m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•27m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•28m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•28m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•29m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•30m ago•1 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•31m ago•0 comments

Ask HN: What is your most disturbing moment with generative AI?

9•gardnr•6mo ago

Comments

bearjaws•6mo ago
Doing a project to migrate from one LMS to another, I put ChatGPT in the middle to fix various mistakes in the content, add alt text for images, transcribe audio, etc.

When importing the content back into Moodle, I came to find that one of the transcripts was 30k+ characters and errored out on import.

For whatever reason, it got stuck in a loop that started like this:

"And since the dawn of time, wow time, its so important, time is so important. What is time, time is so important, theres not enough time, time is so important time"... repeat "time is so important" until token limit.

This really gave me a bit of existential dread.

lynx97•6mo ago
Try reducing the temperature. The default of 1.0 is sometimes too "creative". Setting it to 0.5 or some such should reduce events like the one you described.
bearjaws•6mo ago
I was already running 0.1 or 0.2 because I didn't want it to deviate far from the source content.
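
A minimal sketch of what setting a lower temperature looks like with the OpenAI Python client; the model name, prompt text, and variable names here are placeholders, not details from this thread:

    # Sketch: pin the sampling temperature low so the model stays close to the source.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    source_text = "..."  # the course content being cleaned up (placeholder)

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0.2,       # default is 1.0; lower values cut down on "creative" drift
        messages=[
            {"role": "system",
             "content": "Clean up this course content. Stay close to the source; do not add material."},
            {"role": "user", "content": source_text},
        ],
    )
    print(response.choices[0].message.content)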
alganet•6mo ago
Nothing is disturbing.
theothertimcook•6mo ago
How much I've come to trust the answers, responses, and information it feeds me for my increasingly frequent queries.
rotexo•6mo ago
I find myself occasionally wondering if 8.11 is in fact greater than 8.9
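
For the record, 8.9 is the larger number; a quick check in a Python shell makes the comparison concrete:

    >>> 8.11 > 8.9
    False
    >>> 8.9 > 8.11
    True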
diatone•6mo ago
Deep fakes have always been horrible. The idea that someone - anyone - can take your image and represent you in ways that can ruin your reputation, is appalling. For example, revenge porn.

Having your likeness used to express an opinion that is the opposite of your own is nasty too. Anyone can now produce, in your name, the kind of thing that has no courtesy, no grace, no kindness or care for the people around you.

The mass extraction and substitution of art has also caused a lot of unnecessary grief. Instead of AI enabling us to pursue creative work… it’s producing slop and making it harder for newbies to develop their craft. And making a lot of people anxious, fearful, and angry.

And finally of course astroturfing, phishing, that kind of thing has in principle become a lot more sophisticated.

It unnerves me that people can pull this capital lever against each other in ways that don’t obviously advance the common good.

dgunay•6mo ago
I saw an AI-generated video the other day of security camera footage of a group of people attempting to rob a store, then running away after the owner shoots at them. The graininess and low framerate made it a lot harder to tell it was AI-generated than the usual shiny, high-res, oddly smooth AI look. There were only very subtle tells: the non-reaction of bystanders in the background, and a physics mistake that was easy to miss in the commotion.

We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.

atleastoptimal•6mo ago
I saw that; it wasn't AI generated. There were red herrings in the compression artifacts. The real store owner spoke about the experience:

https://x.com/Rimmy_Downunder/status/1947156872198595058

(sorry about the X link, I couldn't find anything else)

The problem of real footage being discredited as AI is as big as the problem of AI footage being passed off as real. But they're both subsets of the larger problem: AI can simulate all costly signals of value very cheaply, so all the inertia that depended on the costliness of those channels breaks down. This is true for epistemics, but also for social bonds (chatbots), credentials, experience and education (AI performing better on many knowledge tasks than experienced humans), and more.

ginayuksel•6mo ago
I once tried prompting an LLM to summarize a blog post I had written myself. Not only did it fail to recognize the main argument, it confidently hallucinated a completely unrelated conclusion. It was disturbing not because it was wrong, but because it sounded so right.

That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.

orangepush•6mo ago
I asked an AI to help me draft an onboarding email for a new feature. It wrote something so human-like, so emotionally aware, that I felt oddly… replaced.

It wasn't just about the writing; it felt like it understood the intention behind the message better than I did. That was the first time I questioned where we're headed.

TXTOS•6mo ago
Honestly, the most disturbing moment for me wasn’t an answer gone wrong — it was realizing why it went wrong.

Most generative AI hallucinations aren’t just data errors. They happen because the language model hits a semantic dead-end — a kind of “collapse” where it can't reconcile competing meanings and defaults to whatever sounds fluent.

We’re building WFGY, a reasoning system that catches these failure points before they explode. It tracks meaning across documents and across time, even when formatting, structure, or logic goes off the rails.

The scariest part? Language never promised to stay consistent. Most models assume it does. We don’t.

Backed by the creator of tesseract.js (36k). More info: https://github.com/onestardao/WFGY