frontpage.

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•50s ago•1 comments

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•2m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•2m ago•1 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
1•byandrev•3m ago•1 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•3m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•4m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
1•layer8•4m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•6m ago•1 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•7m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•8m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on GitHub today

https://github.com/KeygraphHQ/shannon
1•hendler•8m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•13m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•13m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•14m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•15m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•15m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•15m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4 GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•16m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•17m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•18m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•19m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•21m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•23m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•24m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•25m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•28m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•31m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•32m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•32m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•33m ago•0 comments

DeepSeek may have used Google's Gemini to train its latest model

https://techcrunch.com/2025/06/03/deepseek-may-have-used-googles-gemini-to-train-its-latest-model/
29•samsmithy•8mo ago

Comments

parineum•8mo ago
At this point, they're all using each other, because so much of the new content they're scraping for training data is itself AI-generated.

These models will converge and plateau, because the datasets are only going to get worse as more of their content becomes incestuous.

wkat4242•8mo ago
Yes, indeed; some studies have already been done on this.

sovietmudkipz•8mo ago
I recall that AI trained on AI output over many cycles eventually produces something akin to a noise texture, as the output degrades rapidly.

Won’t most AI-produced content put out into the public be human-curated, thus heavily mitigating this degradation effect? If we’re going to see a full-length AI-generated movie, it seems like humans will be heavily involved, hand-holding the output and throwing out the AI’s nonsense.
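
As a toy illustration of that degradation effect, here is a rough sketch of what the model-collapse literature describes, not a claim about how any real model is trained: repeatedly fit a simple distribution to samples drawn only from the previous generation's fit, and finite-sample error compounds until the original data's tails are lost.

```python
# Toy "model collapse" loop: each generation is fit only to samples produced
# by the previous generation, so estimation error compounds and the original
# distribution's spread and tails are progressively lost. Purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()              # "train" on current data
    data = rng.normal(loc=mu, scale=sigma, size=100)  # next gen sees only model output
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```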

AstroBen•8mo ago
Some will be heavily curated, by those who care about quality. That is a lot slower to produce and requires some expertise to do right, so there will be far less of it.

The vast majority of content will be (already is) whatever is fastest and easiest to create: AI slop.

jsheard•8mo ago
The default Llama 4 system prompt even instructs it to avoid using various ChatGPT-isms, presumably because they've already scraped so much GPT-generated material that it noticeably skews their models' output.

zackangelo•8mo ago
There might be a plateau coming but I’m not sure that will be the reason.

It seems counterintuitive but there is some research suggesting that using synthetic data might actually be productive.

jsheard•8mo ago
I think there's probably a distinction to be made between deliberate, careful use of synthetic data and blindly scraping 1 PB of LLM-generated SEO spam and force-feeding it into a new model. Maybe the former is useful, but the latter... probably not.

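A bare-bones sketch of that distinction, with hypothetical generate_candidates and score_quality callables standing in for whatever a real pipeline would actually use: synthetic text only enters the training mix if it clears a quality gate, and "blind scraping" is the degenerate case of accepting everything.

```python
# Hypothetical sketch of curated synthetic data vs. indiscriminate ingestion.
# `generate_candidates` and `score_quality` are placeholders, not any real
# lab's API; the point is only that a filtering gate sits between generation
# and the training mix.
from typing import Callable, List

def build_synthetic_slice(
    generate_candidates: Callable[[int], List[str]],
    score_quality: Callable[[str], float],
    n_candidates: int = 1000,
    threshold: float = 0.8,
) -> List[str]:
    """Return only the synthetic samples that clear a quality threshold."""
    candidates = generate_candidates(n_candidates)
    kept = [text for text in candidates if score_quality(text) >= threshold]
    # threshold=0.0 is the "scrape everything and force-feed it" case.
    return kept

# Dummy usage with stand-in callables.
if __name__ == "__main__":
    def fake_generate(n: int) -> List[str]:
        return [f"synthetic example {i}" for i in range(n)]

    def fake_score(text: str) -> float:
        return 0.9 if text.endswith(("0", "5")) else 0.1

    print(len(build_synthetic_slice(fake_generate, fake_score, n_candidates=20)))
```
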
philipwhiuk•8mo ago
> Distillation isn’t an uncommon practice, but OpenAI’s terms of service prohibit customers from using the company’s model outputs to build competing AI.

I have the absolute tiniest of violins for this, given OpenAI's behaviour toward everyone else's terms of service.
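
For readers unfamiliar with the term in the quote, here is a rough sketch of what distillation means mechanically, under standard textbook assumptions rather than any vendor's actual pipeline: a student model is trained to match a teacher's output distribution, e.g. by minimizing KL divergence between temperature-softened probabilities.

```python
# Rough sketch of distillation mechanics: the student is penalized for
# diverging from the teacher's softened output distribution. Standard
# formulation, not any specific company's training setup.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: a student whose logits track the teacher's gets a lower loss.
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])
close_student = teacher + 0.1
far_student = -teacher
print(distillation_loss(teacher, close_student))  # small
print(distillation_loss(teacher, far_student))    # larger
```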

sovietmudkipz•8mo ago
“Copyright must evolve for the 21st century (…so that AI can legally steal everything produced by people).”

And also “Don’t steal our AI!”

jsheard•8mo ago
The world is not prepared for the mental gymnastics that OpenAI/Google/etc will employ to defend their copyright if their big models ever get leaked.

bitpush•8mo ago
I see no evidence that Google is doing this. Any sources?

Zetaphor•8mo ago
I'm still unclear how they're able to claim this, considering that Gemini's raw thinking traces were never exposed to the end user, only summaries.

dev_hugepages•8mo ago
The raw thinking traces were shown previously in the API.

vb-8448•8mo ago
I wonder if at this point it really matters who used whose data ...

ksymph•8mo ago
Interesting. The tonal change has definitely been noticeable. It also seems a bit more succinct and precise with its word choice, less flowery. That does seem to be in line with Gemini's behavior.

hulitu•8mo ago
> Sam Paech, a Melbourne-based developer who creates “emotional intelligence” evaluations for AI, published what he claims is evidence that DeepSeek’s latest model was trained on outputs from Gemini. DeepSeek’s model, called R1-0528, prefers words and expressions similar to those that Google’s Gemini 2.5 Pro favors, said Paech in an X post.

And if you search for the personal information of Android users, including location data, sex, and political orientation, it is all there. /s
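
Mechanically, the comparison Paech describes in the quote above can be sketched as measuring how close two models' word-choice distributions are. The snippet below is a crude stand-in using Jensen-Shannon divergence over bag-of-words frequencies; the corpora are placeholder strings, not real model outputs, and this does not reproduce his actual evaluation.

```python
# Compare two corpora's word-choice distributions with Jensen-Shannon
# divergence (in bits). Lower values mean more similar word preferences.
import math
from collections import Counter

def word_distribution(texts):
    counts = Counter(word for text in texts for word in text.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two word-frequency distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}

    def kl(a, b):
        return sum(a.get(w, 0.0) * math.log2(a.get(w, 0.0) / b[w])
                   for w in vocab if a.get(w, 0.0) > 0.0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical outputs; a low divergence would suggest similar word preferences.
model_a = ["certainly, here is a concise overview of the topic"]
model_b = ["certainly, here is a brief overview of that topic"]
print(jensen_shannon(word_distribution(model_a), word_distribution(model_b)))
```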