frontpage.

Show HN: Writtte – Draft and publish articles without reformatting, anywhere

https://writtte.xyz
1•lasgawe•37s ago•0 comments

Portuguese icon (FROM A CAN) makes a simple meal (Canned Fish Files) [video]

https://www.youtube.com/watch?v=e9FUdOfp8ME
1•zeristor•2m ago•0 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
1•gnufx•4m ago•0 comments

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•8m ago•0 comments

.72% Variance Lance

1•mav5431•9m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•11m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•11m ago•1 comment

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•11m ago•1 comment

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•13m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•14m ago•1 comment

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
1•byandrev•14m ago•1 comment

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•14m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•15m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•16m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•17m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•18m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•19m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on Github today

https://github.com/KeygraphHQ/shannon
1•hendler•19m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•24m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•24m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•25m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•26m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•26m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•26m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•27m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•28m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•29m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•30m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•32m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•34m ago•0 comments

Researchers Uncover Hidden Ingredients Behind AI Creativity

https://www.quantamagazine.org/researchers-uncover-hidden-ingredients-behind-ai-creativity-20250630/
32•isaacfrond•7mo ago

Comments

MangoToupe•7mo ago
How does one distinguish between what some call "hallucinations" and creativity?
add-sub-mul-div•7mo ago
Temperature settings will not get you to David Lynch.
77pt77•7mo ago
Correct. Increasing the temperature will probably result in something that makes more sense than Lynch's output.
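
For anyone unfamiliar with the knob being discussed: "temperature" just rescales a model's output logits before sampling, trading determinism for randomness. A minimal sketch with a made-up four-token logit vector standing in for a real model's output (everything here is illustrative, not any particular API):

    import numpy as np

    def sample_with_temperature(logits, temperature, rng):
        """Divide logits by the temperature, softmax, then sample one token index."""
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()  # for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    rng = np.random.default_rng(0)
    logits = [4.0, 3.5, 1.0, -2.0]  # hypothetical scores for four candidate tokens
    for t in (0.2, 1.0, 2.0):
        samples = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
        print(t, np.bincount(samples, minlength=4) / 1000)

Low temperature concentrates sampling on the tokens the model already ranks highest; high temperature flattens the distribution. Either way it only reshuffles what the model already considers plausible, which is the point being made above.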
MangoToupe•7mo ago
Yes, because the thing we look for in art is... coherence?
mock-possum•7mo ago
It’s one thing, certainly.
jmsdnns•7mo ago
Hallucination is when we don't like it; creativity is when we do.
fusionadvocate•7mo ago
Would you rather have a hallucinating driver or a creative driver coming your way?
yard2010•7mo ago
Hell, I don't want any AI driver coming my way.
MangoToupe•7mo ago
I'd rather have someone I can hold liable for their decisions, tbh.
jerf•7mo ago
The article is about image generators. Image generators specifically work by starting with noise and then refining the noise into an image. That's not how driving software works and this is not a relevant point.
fusionadvocate•7mo ago
Sorry, I failed to follow your reasoning. My comment had nothing to do with "driving software"; it addressed the parent post by posing the question a different way.
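
For context on the mechanism jerf describes: a diffusion-style image generator starts from pure noise and repeatedly applies a learned denoiser until an image emerges. A heavily simplified sketch of that reverse loop, with a placeholder in place of the trained network and without a proper noise schedule:

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_denoiser(x, t):
        # Placeholder for a trained network that predicts the noise present in x at step t.
        # In a real model this network is where all the learned structure comes from.
        return x * 0.5

    def generate(shape=(64, 64), steps=50):
        x = rng.normal(size=shape)  # start from pure noise
        for t in reversed(range(steps)):
            predicted_noise = toy_denoiser(x, t)
            x = x - predicted_noise / steps  # strip away a little of the predicted noise
            if t > 0:
                x = x + 0.05 * rng.normal(size=shape)  # stochastic samplers re-inject some noise
        return x

    image = generate()

The loop structure is all that matters for the point above: the output is a step-by-step refinement of noise, with the trained denoiser supplying everything that ends up looking like content.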
ticulatedspline•7mo ago
Hallucinations are just lies one believes. There's definitely overlap between creativity and lying, but hallucinations tend not to have the conscious component of lying.

Categorizing the difference with AI is much the same as with a person: context. If you ask a person what the capital of Florida is and they tell you "Pink Elephant, and the capitol building is a literal giant pink elephant with an escalator up its trunk," how creative, but it's a lie. But you press them, and it seems they genuinely believe it and swear up and down they saw it in a book. Now it's a hallucination. Though is it creative if they believe they're just regurgitating the contents of a book? Technically yes, but the creativity is subconscious.

Now, if you asked the same person to make up a fictitious capital for a fake state and got that answer, you'd say it was creative, and not a lie or a hallucination, since the context was fiction to begin with, even if the source of that creative thought comes from the same place in both instances. If there's no objectively correct answer and it's not a copy of an existing known thing, then it's "creativity."

The biggest difference is that hallucinations are rare in humans: in the scenario above, we'd probably assume the person was being flippant, or didn't know and was a pathological liar (and not a very good one). We don't attribute those motives or that capacity to AI, though; the AI genuinely seems to think it's right, that the response is coming honestly, so we categorize all its factual errors as hallucinations.

rbanffy•7mo ago
Hallucination is closer to a human simply being wrong. At any point in time, we are each sure of several things that aren't true.
Retr0id•7mo ago
The paper: https://arxiv.org/pdf/2412.20292
josefritzishere•7mo ago
You can always spot AI marketing. There is this consistent misuse of words like "creativity," which implies intent. AI does not have intent or self-awareness. AI has no concept of objective reality. The word "hallucinations" has the same problem: with no concept of objective reality, there is no understanding of the real and the unreal. To quote a popular article, it's bullshitting. All the LLM and algorithmic refinements only improve its bullshitting. https://www.psypost.org/scholars-ai-isnt-hallucinating-its-b...
hopelite•7mo ago
I am leery that such a claim is just attention bias, because although much of it surely is AI gobbledygook, it looks just like the marketing gobbledygook of the pre-AI era once you ignore the obvious AI tells.

I think you may just be noticing sloppy attention to detail, i.e., not proofreading and relying on AI that is not quite ready, similar to devs committing AI slop without review.

I suspect someone will eventually train a marketing-specialized AI focused on that specific kind of promotional, manipulative language. But, frankly, I don't see it being long-lived either, because I see marketing being nullified entirely by AI. You don't need marketing when humans are no longer the ones making the decisions or the purchases.

tedd4u•7mo ago
Karpathy: hallucination is all they do in some sense

https://simonwillison.net/2023/Dec/9/andrej-karpathy/

josefritzishere•7mo ago
100% agree
bgwalter•7mo ago
“Human and AI creativity may not be so different”

I guess they need more funding and grants. A human does not need to ingest the entire Internet in order to plagiarize what was read. A human does not need a prompt in order to take action. Two humans can have a conversation that does not collapse immediately.

These people apparently need coaching on the most basic activities. How to solve this in the future? Perhaps women should refuse to procreate with "AI" researchers, who prefer machines anyway.

scarmig•7mo ago
Your "ideas" are just regurgitations of things you read off the internet; you have no coherent theory of "creativity" beyond some ineffable reference to the sanctity of the human soul.
acedTrex•7mo ago
So true, it's well known that "ideas" came around at the same time as the advent of the modern internet.
ergonaught•7mo ago
Do you have any grasp of how much stuff your brain ingested to enable you to post this?

No, clearly.

Ygg2•7mo ago
Whatever it was, it was only a fraction of what LLMs ingest.
rbanffy•7mo ago
16 hours a day of non-stop audio, video, and other sensory perception, plus dreams while you sleep. That's a lot of data. We might read fewer books, but we still get a very large training dataset.
EnergyAmy•7mo ago
Humans ingest an order of magnitude more information before becoming anywhere near as intelligent as an LLM.
soulofmischief•7mo ago
https://www.debevoise.com/insights/publications/2025/06/anth...

> This led the court to conclude that the “[a]uthors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works.”

Workaccount2•7mo ago
Don't worry, they're all just stochastic parrots[1]

[1]https://ai.vixra.org/pdf/2506.0065v1.pdf

GaggiX•7mo ago
From the paper: "Level ∞: Pattern matching with a soul (humans)"

What am I even reading ahah

Edit: Okay after reading it a bit, this paper is actually pretty funny

rbanffy•7mo ago
Can we prove we are not?
nextaccountic•7mo ago
Humans spend years training 24/7 before they can do anything useful. People even train during sleep, in their dreams. And on top of that, we transmit culture to other people, which accelerates their training.

And that's with the huge "pre-training" dataset stored in our genetic code (comprising billions of years of evolution), alongside epigenetic inheritance.

tempodox•7mo ago
“Hidden ingredients” ==> none of them understand how and why any of this works (or not). They could be easily defeated by Harry Potter, because he understands magic!
empath75•7mo ago
> For example, large language models and other AI systems also appear to display creativity, but they don’t harness locality and equivariance.

"Next token" prediction is (primary) local, in the sense that the early layers are largely concerned with grammatical coherence, not semantics, and if you shifted the text input context window by a few paragraphs, it would adjust the output accordingly.

It's not _mathematically_ the same, but i do think the mechanics are similar.
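
The "locality and equivariance" in the quoted passage are properties of the convolutional denoisers used in image diffusion models: each output pixel depends only on a small neighborhood, and shifting the input shifts the output the same way. A small sketch checking both properties, assuming NumPy and SciPy are available:

    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(0)
    image = rng.normal(size=(32, 32))
    kernel = rng.normal(size=(3, 3))  # local: each output pixel sees only a 3x3 patch

    def conv(x):
        return convolve(x, kernel, mode="wrap")

    # Translation equivariance: convolving a shifted image equals shifting the convolved image.
    shifted = np.roll(image, shift=(5, 7), axis=(0, 1))
    print(np.allclose(conv(shifted), np.roll(conv(image), shift=(5, 7), axis=(0, 1))))  # True

    # Locality: perturbing one pixel changes only a 3x3 neighborhood of the output.
    perturbed = image.copy()
    perturbed[16, 16] += 10.0
    print(np.count_nonzero(np.abs(conv(perturbed) - conv(image)) > 1e-9))  # 9

Self-attention in a language model, by contrast, lets every token attend to every other token, so it is not local in this sense, which is presumably what the article means by LLMs not harnessing these ingredients.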